| column | dtype | length / values |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
| arxiv | sequencelengths | 0 to 201 |
| languages | sequencelengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | sequencelengths | 0 to 722 |
| processed_texts | sequencelengths | 1 to 723 |
| tokens_length | sequencelengths | 1 to 723 |
| input_texts | sequencelengths | 1 to 1 |
translation | transformers |
### opus-mt-pon-en
* source languages: pon
* target languages: en
* OPUS readme: [pon-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pon-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pon-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pon.en | 34.1 | 0.489 |
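
The card above does not include a usage snippet; the following is a minimal sketch of loading this checkpoint through the `transformers` translation pipeline. The example sentence is an illustrative placeholder, not taken from the card.

```python
# Minimal sketch: Pohnpeian -> English with the high-level translation pipeline.
# Requires transformers, a PyTorch backend, and the sentencepiece package for the tokenizer.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-pon-en")

# Illustrative input; any Pohnpeian sentence can be passed here.
result = translator("Kaselehlie maing.", max_length=128)
print(result[0]["translation_text"])
```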
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pon-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pon",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #pon #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-pon-en
* source languages: pon
* target languages: en
* OPUS readme: pon-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 34.1, chr-F: 0.489
| [
"### opus-mt-pon-en\n\n\n* source languages: pon\n* target languages: en\n* OPUS readme: pon-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.1, chr-F: 0.489"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pon #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-pon-en\n\n\n* source languages: pon\n* target languages: en\n* OPUS readme: pon-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.1, chr-F: 0.489"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pon #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pon-en\n\n\n* source languages: pon\n* target languages: en\n* OPUS readme: pon-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.1, chr-F: 0.489"
] |
translation | transformers |
### opus-mt-pon-es
* source languages: pon
* target languages: es
* OPUS readme: [pon-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pon-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pon-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pon.es | 22.4 | 0.402 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pon-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pon",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #pon #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-pon-es
* source languages: pon
* target languages: es
* OPUS readme: pon-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.4, chr-F: 0.402
| [
"### opus-mt-pon-es\n\n\n* source languages: pon\n* target languages: es\n* OPUS readme: pon-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.4, chr-F: 0.402"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pon #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-pon-es\n\n\n* source languages: pon\n* target languages: es\n* OPUS readme: pon-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.4, chr-F: 0.402"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pon #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pon-es\n\n\n* source languages: pon\n* target languages: es\n* OPUS readme: pon-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.4, chr-F: 0.402"
] |
translation | transformers |
### opus-mt-pon-fi
* source languages: pon
* target languages: fi
* OPUS readme: [pon-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pon-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pon-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pon.fi | 22.2 | 0.434 |
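
For finer control than the pipeline offers, the checkpoint can also be loaded with the explicit Marian classes; this is a hedged sketch with an illustrative placeholder input.

```python
# Sketch: load tokenizer and model explicitly and call generate() directly.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-pon-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)  # SentencePiece-based tokenizer
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Kaselehlie maing."], return_tensors="pt", padding=True)  # placeholder input
generated = model.generate(**batch, max_length=128)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```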
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pon-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pon",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #pon #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-pon-fi
* source languages: pon
* target languages: fi
* OPUS readme: pon-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.2, chr-F: 0.434
| [
"### opus-mt-pon-fi\n\n\n* source languages: pon\n* target languages: fi\n* OPUS readme: pon-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.2, chr-F: 0.434"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pon #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-pon-fi\n\n\n* source languages: pon\n* target languages: fi\n* OPUS readme: pon-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.2, chr-F: 0.434"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pon #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pon-fi\n\n\n* source languages: pon\n* target languages: fi\n* OPUS readme: pon-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.2, chr-F: 0.434"
] |
translation | transformers |
### opus-mt-pon-fr
* source languages: pon
* target languages: fr
* OPUS readme: [pon-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pon-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pon-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pon.fr | 24.4 | 0.410 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pon-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pon",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #pon #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-pon-fr
* source languages: pon
* target languages: fr
* OPUS readme: pon-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.4, chr-F: 0.410
| [
"### opus-mt-pon-fr\n\n\n* source languages: pon\n* target languages: fr\n* OPUS readme: pon-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.4, chr-F: 0.410"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pon #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-pon-fr\n\n\n* source languages: pon\n* target languages: fr\n* OPUS readme: pon-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.4, chr-F: 0.410"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pon #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pon-fr\n\n\n* source languages: pon\n* target languages: fr\n* OPUS readme: pon-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.4, chr-F: 0.410"
] |
translation | transformers |
### opus-mt-pon-sv
* source languages: pon
* target languages: sv
* OPUS readme: [pon-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pon-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pon-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pon-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pon.sv | 26.4 | 0.436 |
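
The original Marian weights linked above can be fetched directly; the sketch below uses only the Python standard library to download the archive and list its contents (typically the Marian model, vocabularies, and SentencePiece files, though the exact layout is not guaranteed here).

```python
# Sketch: download the original pon-sv weights and list the archive contents.
import io
import urllib.request
import zipfile

url = "https://object.pouta.csc.fi/OPUS-MT-models/pon-sv/opus-2020-01-16.zip"
with urllib.request.urlopen(url) as response:
    data = response.read()

with zipfile.ZipFile(io.BytesIO(data)) as archive:
    for name in archive.namelist():
        print(name)
```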
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pon-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pon",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #pon #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-pon-sv
* source languages: pon
* target languages: sv
* OPUS readme: pon-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.4, chr-F: 0.436
| [
"### opus-mt-pon-sv\n\n\n* source languages: pon\n* target languages: sv\n* OPUS readme: pon-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.4, chr-F: 0.436"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pon #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-pon-sv\n\n\n* source languages: pon\n* target languages: sv\n* OPUS readme: pon-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.4, chr-F: 0.436"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pon #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pon-sv\n\n\n* source languages: pon\n* target languages: sv\n* OPUS readme: pon-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.4, chr-F: 0.436"
] |
translation | transformers |
### pqe-eng
* source group: Eastern Malayo-Polynesian languages
* target group: English
* OPUS readme: [pqe-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pqe-eng/README.md)
* model: transformer
* source language(s): fij gil haw mah mri nau niu rap smo tah ton tvl
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-28.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.zip)
* test set translations: [opus-2020-06-28.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.test.txt)
* test set scores: [opus-2020-06-28.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fij-eng.fij.eng | 26.9 | 0.361 |
| Tatoeba-test.gil-eng.gil.eng | 49.0 | 0.618 |
| Tatoeba-test.haw-eng.haw.eng | 1.6 | 0.126 |
| Tatoeba-test.mah-eng.mah.eng | 13.7 | 0.257 |
| Tatoeba-test.mri-eng.mri.eng | 7.4 | 0.250 |
| Tatoeba-test.multi.eng | 12.6 | 0.268 |
| Tatoeba-test.nau-eng.nau.eng | 2.3 | 0.125 |
| Tatoeba-test.niu-eng.niu.eng | 34.4 | 0.471 |
| Tatoeba-test.rap-eng.rap.eng | 10.3 | 0.215 |
| Tatoeba-test.smo-eng.smo.eng | 28.5 | 0.413 |
| Tatoeba-test.tah-eng.tah.eng | 12.1 | 0.199 |
| Tatoeba-test.ton-eng.ton.eng | 41.8 | 0.517 |
| Tatoeba-test.tvl-eng.tvl.eng | 42.9 | 0.540 |
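
Because this checkpoint covers a whole source group, a single model serves all of the listed source languages; the sketch below runs a few illustrative greetings (Fijian, Samoan, and Tongan placeholders) through the same pipeline.

```python
# Sketch: one multilingual pqe-eng model handles every listed source language.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-pqe-en")

samples = ["Bula vinaka.", "Talofa lava.", "Malo e lelei."]  # illustrative inputs
for text in samples:
    print(translator(text, max_length=128)[0]["translation_text"])
```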
### System Info:
- hf_name: pqe-eng
- source_languages: pqe
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pqe-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fj', 'mi', 'ty', 'to', 'na', 'sm', 'mh', 'pqe', 'en']
- src_constituents: {'haw', 'gil', 'rap', 'fij', 'tvl', 'mri', 'tah', 'niu', 'ton', 'nau', 'smo', 'mah'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pqe-eng/opus-2020-06-28.test.txt
- src_alpha3: pqe
- tgt_alpha3: eng
- short_pair: pqe-en
- chrF2_score: 0.268
- bleu: 12.6
- brevity_penalty: 1.0
- ref_len: 4568.0
- src_name: Eastern Malayo-Polynesian languages
- tgt_name: English
- train_date: 2020-06-28
- src_alpha2: pqe
- tgt_alpha2: en
- prefer_old: False
- long_pair: pqe-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["fj", "mi", "ty", "to", "na", "sm", "mh", "pqe", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pqe-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fj",
"mi",
"ty",
"to",
"na",
"sm",
"mh",
"pqe",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fj",
"mi",
"ty",
"to",
"na",
"sm",
"mh",
"pqe",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fj #mi #ty #to #na #sm #mh #pqe #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### pqe-eng
* source group: Eastern Malayo-Polynesian languages
* target group: English
* OPUS readme: pqe-eng
* model: transformer
* source language(s): fij gil haw mah mri nau niu rap smo tah ton tvl
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.9, chr-F: 0.361
testset: URL, BLEU: 49.0, chr-F: 0.618
testset: URL, BLEU: 1.6, chr-F: 0.126
testset: URL, BLEU: 13.7, chr-F: 0.257
testset: URL, BLEU: 7.4, chr-F: 0.250
testset: URL, BLEU: 12.6, chr-F: 0.268
testset: URL, BLEU: 2.3, chr-F: 0.125
testset: URL, BLEU: 34.4, chr-F: 0.471
testset: URL, BLEU: 10.3, chr-F: 0.215
testset: URL, BLEU: 28.5, chr-F: 0.413
testset: URL, BLEU: 12.1, chr-F: 0.199
testset: URL, BLEU: 41.8, chr-F: 0.517
testset: URL, BLEU: 42.9, chr-F: 0.540
### System Info:
* hf\_name: pqe-eng
* source\_languages: pqe
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['fj', 'mi', 'ty', 'to', 'na', 'sm', 'mh', 'pqe', 'en']
* src\_constituents: {'haw', 'gil', 'rap', 'fij', 'tvl', 'mri', 'tah', 'niu', 'ton', 'nau', 'smo', 'mah'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: pqe
* tgt\_alpha3: eng
* short\_pair: pqe-en
* chrF2\_score: 0.268
* bleu: 12.6
* brevity\_penalty: 1.0
* ref\_len: 4568.0
* src\_name: Eastern Malayo-Polynesian languages
* tgt\_name: English
* train\_date: 2020-06-28
* src\_alpha2: pqe
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: pqe-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### pqe-eng\n\n\n* source group: Eastern Malayo-Polynesian languages\n* target group: English\n* OPUS readme: pqe-eng\n* model: transformer\n* source language(s): fij gil haw mah mri nau niu rap smo tah ton tvl\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.9, chr-F: 0.361\ntestset: URL, BLEU: 49.0, chr-F: 0.618\ntestset: URL, BLEU: 1.6, chr-F: 0.126\ntestset: URL, BLEU: 13.7, chr-F: 0.257\ntestset: URL, BLEU: 7.4, chr-F: 0.250\ntestset: URL, BLEU: 12.6, chr-F: 0.268\ntestset: URL, BLEU: 2.3, chr-F: 0.125\ntestset: URL, BLEU: 34.4, chr-F: 0.471\ntestset: URL, BLEU: 10.3, chr-F: 0.215\ntestset: URL, BLEU: 28.5, chr-F: 0.413\ntestset: URL, BLEU: 12.1, chr-F: 0.199\ntestset: URL, BLEU: 41.8, chr-F: 0.517\ntestset: URL, BLEU: 42.9, chr-F: 0.540",
"### System Info:\n\n\n* hf\\_name: pqe-eng\n* source\\_languages: pqe\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fj', 'mi', 'ty', 'to', 'na', 'sm', 'mh', 'pqe', 'en']\n* src\\_constituents: {'haw', 'gil', 'rap', 'fij', 'tvl', 'mri', 'tah', 'niu', 'ton', 'nau', 'smo', 'mah'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pqe\n* tgt\\_alpha3: eng\n* short\\_pair: pqe-en\n* chrF2\\_score: 0.268\n* bleu: 12.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 4568.0\n* src\\_name: Eastern Malayo-Polynesian languages\n* tgt\\_name: English\n* train\\_date: 2020-06-28\n* src\\_alpha2: pqe\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: pqe-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fj #mi #ty #to #na #sm #mh #pqe #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### pqe-eng\n\n\n* source group: Eastern Malayo-Polynesian languages\n* target group: English\n* OPUS readme: pqe-eng\n* model: transformer\n* source language(s): fij gil haw mah mri nau niu rap smo tah ton tvl\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.9, chr-F: 0.361\ntestset: URL, BLEU: 49.0, chr-F: 0.618\ntestset: URL, BLEU: 1.6, chr-F: 0.126\ntestset: URL, BLEU: 13.7, chr-F: 0.257\ntestset: URL, BLEU: 7.4, chr-F: 0.250\ntestset: URL, BLEU: 12.6, chr-F: 0.268\ntestset: URL, BLEU: 2.3, chr-F: 0.125\ntestset: URL, BLEU: 34.4, chr-F: 0.471\ntestset: URL, BLEU: 10.3, chr-F: 0.215\ntestset: URL, BLEU: 28.5, chr-F: 0.413\ntestset: URL, BLEU: 12.1, chr-F: 0.199\ntestset: URL, BLEU: 41.8, chr-F: 0.517\ntestset: URL, BLEU: 42.9, chr-F: 0.540",
"### System Info:\n\n\n* hf\\_name: pqe-eng\n* source\\_languages: pqe\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fj', 'mi', 'ty', 'to', 'na', 'sm', 'mh', 'pqe', 'en']\n* src\\_constituents: {'haw', 'gil', 'rap', 'fij', 'tvl', 'mri', 'tah', 'niu', 'ton', 'nau', 'smo', 'mah'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pqe\n* tgt\\_alpha3: eng\n* short\\_pair: pqe-en\n* chrF2\\_score: 0.268\n* bleu: 12.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 4568.0\n* src\\_name: Eastern Malayo-Polynesian languages\n* tgt\\_name: English\n* train\\_date: 2020-06-28\n* src\\_alpha2: pqe\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: pqe-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
69,
424,
492
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fj #mi #ty #to #na #sm #mh #pqe #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### pqe-eng\n\n\n* source group: Eastern Malayo-Polynesian languages\n* target group: English\n* OPUS readme: pqe-eng\n* model: transformer\n* source language(s): fij gil haw mah mri nau niu rap smo tah ton tvl\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.9, chr-F: 0.361\ntestset: URL, BLEU: 49.0, chr-F: 0.618\ntestset: URL, BLEU: 1.6, chr-F: 0.126\ntestset: URL, BLEU: 13.7, chr-F: 0.257\ntestset: URL, BLEU: 7.4, chr-F: 0.250\ntestset: URL, BLEU: 12.6, chr-F: 0.268\ntestset: URL, BLEU: 2.3, chr-F: 0.125\ntestset: URL, BLEU: 34.4, chr-F: 0.471\ntestset: URL, BLEU: 10.3, chr-F: 0.215\ntestset: URL, BLEU: 28.5, chr-F: 0.413\ntestset: URL, BLEU: 12.1, chr-F: 0.199\ntestset: URL, BLEU: 41.8, chr-F: 0.517\ntestset: URL, BLEU: 42.9, chr-F: 0.540### System Info:\n\n\n* hf\\_name: pqe-eng\n* source\\_languages: pqe\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fj', 'mi', 'ty', 'to', 'na', 'sm', 'mh', 'pqe', 'en']\n* src\\_constituents: {'haw', 'gil', 'rap', 'fij', 'tvl', 'mri', 'tah', 'niu', 'ton', 'nau', 'smo', 'mah'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pqe\n* tgt\\_alpha3: eng\n* short\\_pair: pqe-en\n* chrF2\\_score: 0.268\n* bleu: 12.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 4568.0\n* src\\_name: Eastern Malayo-Polynesian languages\n* tgt\\_name: English\n* train\\_date: 2020-06-28\n* src\\_alpha2: pqe\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: pqe-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-prl-es
* source languages: prl
* target languages: es
* OPUS readme: [prl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/prl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/prl-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/prl-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/prl-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.prl.es | 93.3 | 0.955 |
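
The BLEU and chr-F numbers in these tables can, in principle, be recomputed with sacrebleu once the released test translations and references have been split into parallel lists; the snippet below is a sketch with placeholder strings and makes no claim about reproducing the exact figures, since tokenization and evaluation settings matter.

```python
# Sketch: corpus BLEU and chrF with sacrebleu, given hypothesis/reference lists.
import sacrebleu

hypotheses = ["dios es amor"]        # placeholder system outputs
references = [["Dios es amor"]]      # placeholder references (one inner list per reference set)

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(bleu.score, chrf.score)
```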
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-prl-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"prl",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #prl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-prl-es
* source languages: prl
* target languages: es
* OPUS readme: prl-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 93.3, chr-F: 0.955
| [
"### opus-mt-prl-es\n\n\n* source languages: prl\n* target languages: es\n* OPUS readme: prl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 93.3, chr-F: 0.955"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #prl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-prl-es\n\n\n* source languages: prl\n* target languages: es\n* OPUS readme: prl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 93.3, chr-F: 0.955"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #prl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-prl-es\n\n\n* source languages: prl\n* target languages: es\n* OPUS readme: prl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 93.3, chr-F: 0.955"
] |
translation | transformers |
### por-cat
* source group: Portuguese
* target group: Catalan
* OPUS readme: [por-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-cat/README.md)
* model: transformer-align
* source language(s): por
* target language(s): cat
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-cat/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-cat/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-cat/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.por.cat | 45.7 | 0.672 |
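
Multiple sentences can be translated in one call by letting the tokenizer pad a batch; below is a sketch with made-up Portuguese inputs.

```python
# Sketch: batched Portuguese -> Catalan translation with beam search.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-pt-ca"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentences = ["Bom dia.", "Onde fica a estação?"]  # illustrative inputs
batch = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
outputs = model.generate(**batch, num_beams=4, max_length=128)
for line in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(line)
```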
### System Info:
- hf_name: por-cat
- source_languages: por
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pt', 'ca']
- src_constituents: {'por'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/por-cat/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/por-cat/opus-2020-06-17.test.txt
- src_alpha3: por
- tgt_alpha3: cat
- short_pair: pt-ca
- chrF2_score: 0.672
- bleu: 45.7
- brevity_penalty: 0.972
- ref_len: 5878.0
- src_name: Portuguese
- tgt_name: Catalan
- train_date: 2020-06-17
- src_alpha2: pt
- tgt_alpha2: ca
- prefer_old: False
- long_pair: por-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["pt", "ca"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pt-ca | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pt",
"ca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pt",
"ca"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #pt #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### por-cat
* source group: Portuguese
* target group: Catalan
* OPUS readme: por-cat
* model: transformer-align
* source language(s): por
* target language(s): cat
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 45.7, chr-F: 0.672
### System Info:
* hf\_name: por-cat
* source\_languages: por
* target\_languages: cat
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['pt', 'ca']
* src\_constituents: {'por'}
* tgt\_constituents: {'cat'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm12k,spm12k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: por
* tgt\_alpha3: cat
* short\_pair: pt-ca
* chrF2\_score: 0.672
* bleu: 45.7
* brevity\_penalty: 0.972
* ref\_len: 5878.0
* src\_name: Portuguese
* tgt\_name: Catalan
* train\_date: 2020-06-17
* src\_alpha2: pt
* tgt\_alpha2: ca
* prefer\_old: False
* long\_pair: por-cat
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### por-cat\n\n\n* source group: Portuguese\n* target group: Catalan\n* OPUS readme: por-cat\n* model: transformer-align\n* source language(s): por\n* target language(s): cat\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.7, chr-F: 0.672",
"### System Info:\n\n\n* hf\\_name: por-cat\n* source\\_languages: por\n* target\\_languages: cat\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pt', 'ca']\n* src\\_constituents: {'por'}\n* tgt\\_constituents: {'cat'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: por\n* tgt\\_alpha3: cat\n* short\\_pair: pt-ca\n* chrF2\\_score: 0.672\n* bleu: 45.7\n* brevity\\_penalty: 0.972\n* ref\\_len: 5878.0\n* src\\_name: Portuguese\n* tgt\\_name: Catalan\n* train\\_date: 2020-06-17\n* src\\_alpha2: pt\n* tgt\\_alpha2: ca\n* prefer\\_old: False\n* long\\_pair: por-cat\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pt #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### por-cat\n\n\n* source group: Portuguese\n* target group: Catalan\n* OPUS readme: por-cat\n* model: transformer-align\n* source language(s): por\n* target language(s): cat\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.7, chr-F: 0.672",
"### System Info:\n\n\n* hf\\_name: por-cat\n* source\\_languages: por\n* target\\_languages: cat\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pt', 'ca']\n* src\\_constituents: {'por'}\n* tgt\\_constituents: {'cat'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: por\n* tgt\\_alpha3: cat\n* short\\_pair: pt-ca\n* chrF2\\_score: 0.672\n* bleu: 45.7\n* brevity\\_penalty: 0.972\n* ref\\_len: 5878.0\n* src\\_name: Portuguese\n* tgt\\_name: Catalan\n* train\\_date: 2020-06-17\n* src\\_alpha2: pt\n* tgt\\_alpha2: ca\n* prefer\\_old: False\n* long\\_pair: por-cat\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
131,
392
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pt #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### por-cat\n\n\n* source group: Portuguese\n* target group: Catalan\n* OPUS readme: por-cat\n* model: transformer-align\n* source language(s): por\n* target language(s): cat\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.7, chr-F: 0.672### System Info:\n\n\n* hf\\_name: por-cat\n* source\\_languages: por\n* target\\_languages: cat\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pt', 'ca']\n* src\\_constituents: {'por'}\n* tgt\\_constituents: {'cat'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: por\n* tgt\\_alpha3: cat\n* short\\_pair: pt-ca\n* chrF2\\_score: 0.672\n* bleu: 45.7\n* brevity\\_penalty: 0.972\n* ref\\_len: 5878.0\n* src\\_name: Portuguese\n* tgt\\_name: Catalan\n* train\\_date: 2020-06-17\n* src\\_alpha2: pt\n* tgt\\_alpha2: ca\n* prefer\\_old: False\n* long\\_pair: por-cat\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### por-epo
* source group: Portuguese
* target group: Esperanto
* OPUS readme: [por-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-epo/README.md)
* model: transformer-align
* source language(s): por
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.por.epo | 26.8 | 0.497 |
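
The `tf` tag on this repository indicates TensorFlow weights alongside the PyTorch ones; assuming they are actually published for this checkpoint, a TensorFlow sketch looks like the following.

```python
# Sketch: load the TensorFlow weights via TFMarianMTModel (assumes TF weights exist).
from transformers import MarianTokenizer, TFMarianMTModel

model_name = "Helsinki-NLP/opus-mt-pt-eo"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = TFMarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Bom dia."], return_tensors="tf", padding=True)  # illustrative input
outputs = model.generate(**batch, max_length=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```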
### System Info:
- hf_name: por-epo
- source_languages: por
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pt', 'eo']
- src_constituents: {'por'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/por-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/por-epo/opus-2020-06-16.test.txt
- src_alpha3: por
- tgt_alpha3: epo
- short_pair: pt-eo
- chrF2_score: 0.49700000000000005
- bleu: 26.8
- brevity_penalty: 0.948
- ref_len: 87408.0
- src_name: Portuguese
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: pt
- tgt_alpha2: eo
- prefer_old: False
- long_pair: por-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["pt", "eo"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pt-eo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pt",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pt",
"eo"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #pt #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### por-epo
* source group: Portuguese
* target group: Esperanto
* OPUS readme: por-epo
* model: transformer-align
* source language(s): por
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.8, chr-F: 0.497
### System Info:
* hf\_name: por-epo
* source\_languages: por
* target\_languages: epo
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['pt', 'eo']
* src\_constituents: {'por'}
* tgt\_constituents: {'epo'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: por
* tgt\_alpha3: epo
* short\_pair: pt-eo
* chrF2\_score: 0.49700000000000005
* bleu: 26.8
* brevity\_penalty: 0.948
* ref\_len: 87408.0
* src\_name: Portuguese
* tgt\_name: Esperanto
* train\_date: 2020-06-16
* src\_alpha2: pt
* tgt\_alpha2: eo
* prefer\_old: False
* long\_pair: por-epo
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### por-epo\n\n\n* source group: Portuguese\n* target group: Esperanto\n* OPUS readme: por-epo\n* model: transformer-align\n* source language(s): por\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.8, chr-F: 0.497",
"### System Info:\n\n\n* hf\\_name: por-epo\n* source\\_languages: por\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pt', 'eo']\n* src\\_constituents: {'por'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: por\n* tgt\\_alpha3: epo\n* short\\_pair: pt-eo\n* chrF2\\_score: 0.49700000000000005\n* bleu: 26.8\n* brevity\\_penalty: 0.948\n* ref\\_len: 87408.0\n* src\\_name: Portuguese\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: pt\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: por-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pt #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### por-epo\n\n\n* source group: Portuguese\n* target group: Esperanto\n* OPUS readme: por-epo\n* model: transformer-align\n* source language(s): por\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.8, chr-F: 0.497",
"### System Info:\n\n\n* hf\\_name: por-epo\n* source\\_languages: por\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pt', 'eo']\n* src\\_constituents: {'por'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: por\n* tgt\\_alpha3: epo\n* short\\_pair: pt-eo\n* chrF2\\_score: 0.49700000000000005\n* bleu: 26.8\n* brevity\\_penalty: 0.948\n* ref\\_len: 87408.0\n* src\\_name: Portuguese\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: pt\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: por-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
136,
409
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pt #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### por-epo\n\n\n* source group: Portuguese\n* target group: Esperanto\n* OPUS readme: por-epo\n* model: transformer-align\n* source language(s): por\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.8, chr-F: 0.497### System Info:\n\n\n* hf\\_name: por-epo\n* source\\_languages: por\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pt', 'eo']\n* src\\_constituents: {'por'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: por\n* tgt\\_alpha3: epo\n* short\\_pair: pt-eo\n* chrF2\\_score: 0.49700000000000005\n* bleu: 26.8\n* brevity\\_penalty: 0.948\n* ref\\_len: 87408.0\n* src\\_name: Portuguese\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: pt\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: por-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### por-glg
* source group: Portuguese
* target group: Galician
* OPUS readme: [por-glg](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-glg/README.md)
* model: transformer-align
* source language(s): por
* target language(s): glg
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-glg/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-glg/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-glg/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.por.glg | 55.8 | 0.737 |
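
Inference can be moved to a GPU by passing a device index to the pipeline; a sketch follows, with a placeholder input sentence.

```python
# Sketch: run pt-gl on GPU when CUDA is available, otherwise fall back to CPU.
import torch
from transformers import pipeline

device = 0 if torch.cuda.is_available() else -1
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-pt-gl", device=device)
print(translator("Bom dia a todos.", max_length=128)[0]["translation_text"])
```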
### System Info:
- hf_name: por-glg
- source_languages: por
- target_languages: glg
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-glg/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pt', 'gl']
- src_constituents: {'por'}
- tgt_constituents: {'glg'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/por-glg/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/por-glg/opus-2020-06-16.test.txt
- src_alpha3: por
- tgt_alpha3: glg
- short_pair: pt-gl
- chrF2_score: 0.737
- bleu: 55.8
- brevity_penalty: 0.996
- ref_len: 2989.0
- src_name: Portuguese
- tgt_name: Galician
- train_date: 2020-06-16
- src_alpha2: pt
- tgt_alpha2: gl
- prefer_old: False
- long_pair: por-glg
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["pt", "gl"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pt-gl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pt",
"gl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pt",
"gl"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #pt #gl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### por-glg
* source group: Portuguese
* target group: Galician
* OPUS readme: por-glg
* model: transformer-align
* source language(s): por
* target language(s): glg
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 55.8, chr-F: 0.737
### System Info:
* hf\_name: por-glg
* source\_languages: por
* target\_languages: glg
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['pt', 'gl']
* src\_constituents: {'por'}
* tgt\_constituents: {'glg'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: por
* tgt\_alpha3: glg
* short\_pair: pt-gl
* chrF2\_score: 0.737
* bleu: 55.8
* brevity\_penalty: 0.996
* ref\_len: 2989.0
* src\_name: Portuguese
* tgt\_name: Galician
* train\_date: 2020-06-16
* src\_alpha2: pt
* tgt\_alpha2: gl
* prefer\_old: False
* long\_pair: por-glg
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### por-glg\n\n\n* source group: Portuguese\n* target group: Galician\n* OPUS readme: por-glg\n* model: transformer-align\n* source language(s): por\n* target language(s): glg\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 55.8, chr-F: 0.737",
"### System Info:\n\n\n* hf\\_name: por-glg\n* source\\_languages: por\n* target\\_languages: glg\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pt', 'gl']\n* src\\_constituents: {'por'}\n* tgt\\_constituents: {'glg'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: por\n* tgt\\_alpha3: glg\n* short\\_pair: pt-gl\n* chrF2\\_score: 0.737\n* bleu: 55.8\n* brevity\\_penalty: 0.996\n* ref\\_len: 2989.0\n* src\\_name: Portuguese\n* tgt\\_name: Galician\n* train\\_date: 2020-06-16\n* src\\_alpha2: pt\n* tgt\\_alpha2: gl\n* prefer\\_old: False\n* long\\_pair: por-glg\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pt #gl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### por-glg\n\n\n* source group: Portuguese\n* target group: Galician\n* OPUS readme: por-glg\n* model: transformer-align\n* source language(s): por\n* target language(s): glg\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 55.8, chr-F: 0.737",
"### System Info:\n\n\n* hf\\_name: por-glg\n* source\\_languages: por\n* target\\_languages: glg\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pt', 'gl']\n* src\\_constituents: {'por'}\n* tgt\\_constituents: {'glg'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: por\n* tgt\\_alpha3: glg\n* short\\_pair: pt-gl\n* chrF2\\_score: 0.737\n* bleu: 55.8\n* brevity\\_penalty: 0.996\n* ref\\_len: 2989.0\n* src\\_name: Portuguese\n* tgt\\_name: Galician\n* train\\_date: 2020-06-16\n* src\\_alpha2: pt\n* tgt\\_alpha2: gl\n* prefer\\_old: False\n* long\\_pair: por-glg\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
136,
403
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pt #gl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### por-glg\n\n\n* source group: Portuguese\n* target group: Galician\n* OPUS readme: por-glg\n* model: transformer-align\n* source language(s): por\n* target language(s): glg\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 55.8, chr-F: 0.737### System Info:\n\n\n* hf\\_name: por-glg\n* source\\_languages: por\n* target\\_languages: glg\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pt', 'gl']\n* src\\_constituents: {'por'}\n* tgt\\_constituents: {'glg'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: por\n* tgt\\_alpha3: glg\n* short\\_pair: pt-gl\n* chrF2\\_score: 0.737\n* bleu: 55.8\n* brevity\\_penalty: 0.996\n* ref\\_len: 2989.0\n* src\\_name: Portuguese\n* tgt\\_name: Galician\n* train\\_date: 2020-06-16\n* src\\_alpha2: pt\n* tgt\\_alpha2: gl\n* prefer\\_old: False\n* long\\_pair: por-glg\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### por-tgl
* source group: Portuguese
* target group: Tagalog
* OPUS readme: [por-tgl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-tgl/README.md)
* model: transformer-align
* source language(s): por
* target language(s): tgl_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.por.tgl | 28.4 | 0.565 |
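
For offline use, the checkpoint can be saved to a local directory once and reloaded from disk later; the path below is a hypothetical example.

```python
# Sketch: cache the pt-tl checkpoint locally and reload it without contacting the Hub.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-pt-tl"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

local_dir = "./opus-mt-pt-tl"  # hypothetical local path
tokenizer.save_pretrained(local_dir)
model.save_pretrained(local_dir)

# Later: load from disk.
tokenizer = MarianTokenizer.from_pretrained(local_dir)
model = MarianMTModel.from_pretrained(local_dir)
```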
### System Info:
- hf_name: por-tgl
- source_languages: por
- target_languages: tgl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-tgl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pt', 'tl']
- src_constituents: {'por'}
- tgt_constituents: {'tgl_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.test.txt
- src_alpha3: por
- tgt_alpha3: tgl
- short_pair: pt-tl
- chrF2_score: 0.565
- bleu: 28.4
- brevity_penalty: 1.0
- ref_len: 13620.0
- src_name: Portuguese
- tgt_name: Tagalog
- train_date: 2020-06-17
- src_alpha2: pt
- tgt_alpha2: tl
- prefer_old: False
- long_pair: por-tgl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["pt", "tl"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pt-tl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pt",
"tl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pt",
"tl"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #pt #tl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### por-tgl
* source group: Portuguese
* target group: Tagalog
* OPUS readme: por-tgl
* model: transformer-align
* source language(s): por
* target language(s): tgl\_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 28.4, chr-F: 0.565
### System Info:
* hf\_name: por-tgl
* source\_languages: por
* target\_languages: tgl
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['pt', 'tl']
* src\_constituents: {'por'}
* tgt\_constituents: {'tgl\_Latn'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: por
* tgt\_alpha3: tgl
* short\_pair: pt-tl
* chrF2\_score: 0.565
* bleu: 28.4
* brevity\_penalty: 1.0
* ref\_len: 13620.0
* src\_name: Portuguese
* tgt\_name: Tagalog
* train\_date: 2020-06-17
* src\_alpha2: pt
* tgt\_alpha2: tl
* prefer\_old: False
* long\_pair: por-tgl
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### por-tgl\n\n\n* source group: Portuguese\n* target group: Tagalog\n* OPUS readme: por-tgl\n* model: transformer-align\n* source language(s): por\n* target language(s): tgl\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.4, chr-F: 0.565",
"### System Info:\n\n\n* hf\\_name: por-tgl\n* source\\_languages: por\n* target\\_languages: tgl\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pt', 'tl']\n* src\\_constituents: {'por'}\n* tgt\\_constituents: {'tgl\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: por\n* tgt\\_alpha3: tgl\n* short\\_pair: pt-tl\n* chrF2\\_score: 0.565\n* bleu: 28.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 13620.0\n* src\\_name: Portuguese\n* tgt\\_name: Tagalog\n* train\\_date: 2020-06-17\n* src\\_alpha2: pt\n* tgt\\_alpha2: tl\n* prefer\\_old: False\n* long\\_pair: por-tgl\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pt #tl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### por-tgl\n\n\n* source group: Portuguese\n* target group: Tagalog\n* OPUS readme: por-tgl\n* model: transformer-align\n* source language(s): por\n* target language(s): tgl\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.4, chr-F: 0.565",
"### System Info:\n\n\n* hf\\_name: por-tgl\n* source\\_languages: por\n* target\\_languages: tgl\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pt', 'tl']\n* src\\_constituents: {'por'}\n* tgt\\_constituents: {'tgl\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: por\n* tgt\\_alpha3: tgl\n* short\\_pair: pt-tl\n* chrF2\\_score: 0.565\n* bleu: 28.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 13620.0\n* src\\_name: Portuguese\n* tgt\\_name: Tagalog\n* train\\_date: 2020-06-17\n* src\\_alpha2: pt\n* tgt\\_alpha2: tl\n* prefer\\_old: False\n* long\\_pair: por-tgl\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
141,
405
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pt #tl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### por-tgl\n\n\n* source group: Portuguese\n* target group: Tagalog\n* OPUS readme: por-tgl\n* model: transformer-align\n* source language(s): por\n* target language(s): tgl\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.4, chr-F: 0.565### System Info:\n\n\n* hf\\_name: por-tgl\n* source\\_languages: por\n* target\\_languages: tgl\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pt', 'tl']\n* src\\_constituents: {'por'}\n* tgt\\_constituents: {'tgl\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: por\n* tgt\\_alpha3: tgl\n* short\\_pair: pt-tl\n* chrF2\\_score: 0.565\n* bleu: 28.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 13620.0\n* src\\_name: Portuguese\n* tgt\\_name: Tagalog\n* train\\_date: 2020-06-17\n* src\\_alpha2: pt\n* tgt\\_alpha2: tl\n* prefer\\_old: False\n* long\\_pair: por-tgl\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### por-ukr
* source group: Portuguese
* target group: Ukrainian
* OPUS readme: [por-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-ukr/README.md)
* model: transformer-align
* source language(s): por
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.por.ukr | 39.8 | 0.616 |
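A minimal usage sketch (not part of the original card), assuming the standard Hugging Face `transformers` translation pipeline; the Portuguese input is illustrative only:

```python
from transformers import pipeline

# Load the checkpoint listed in this card.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-pt-uk")

# Illustrative Portuguese input sentence.
print(translator("Eu gosto de aprender novas línguas."))
```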
### System Info:
- hf_name: por-ukr
- source_languages: por
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pt', 'uk']
- src_constituents: {'por'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/por-ukr/opus-2020-06-17.test.txt
- src_alpha3: por
- tgt_alpha3: ukr
- short_pair: pt-uk
- chrF2_score: 0.616
- bleu: 39.8
- brevity_penalty: 0.9990000000000001
- ref_len: 18933.0
- src_name: Portuguese
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: pt
- tgt_alpha2: uk
- prefer_old: False
- long_pair: por-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["pt", "uk"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-pt-uk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pt",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pt",
"uk"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #pt #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### por-ukr
* source group: Portuguese
* target group: Ukrainian
* OPUS readme: por-ukr
* model: transformer-align
* source language(s): por
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 39.8, chr-F: 0.616
### System Info:
* hf\_name: por-ukr
* source\_languages: por
* target\_languages: ukr
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['pt', 'uk']
* src\_constituents: {'por'}
* tgt\_constituents: {'ukr'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: por
* tgt\_alpha3: ukr
* short\_pair: pt-uk
* chrF2\_score: 0.616
* bleu: 39.8
* brevity\_penalty: 0.9990000000000001
* ref\_len: 18933.0
* src\_name: Portuguese
* tgt\_name: Ukrainian
* train\_date: 2020-06-17
* src\_alpha2: pt
* tgt\_alpha2: uk
* prefer\_old: False
* long\_pair: por-ukr
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### por-ukr\n\n\n* source group: Portuguese\n* target group: Ukrainian\n* OPUS readme: por-ukr\n* model: transformer-align\n* source language(s): por\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.8, chr-F: 0.616",
"### System Info:\n\n\n* hf\\_name: por-ukr\n* source\\_languages: por\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pt', 'uk']\n* src\\_constituents: {'por'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: por\n* tgt\\_alpha3: ukr\n* short\\_pair: pt-uk\n* chrF2\\_score: 0.616\n* bleu: 39.8\n* brevity\\_penalty: 0.9990000000000001\n* ref\\_len: 18933.0\n* src\\_name: Portuguese\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: pt\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: por-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pt #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### por-ukr\n\n\n* source group: Portuguese\n* target group: Ukrainian\n* OPUS readme: por-ukr\n* model: transformer-align\n* source language(s): por\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.8, chr-F: 0.616",
"### System Info:\n\n\n* hf\\_name: por-ukr\n* source\\_languages: por\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pt', 'uk']\n* src\\_constituents: {'por'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: por\n* tgt\\_alpha3: ukr\n* short\\_pair: pt-uk\n* chrF2\\_score: 0.616\n* bleu: 39.8\n* brevity\\_penalty: 0.9990000000000001\n* ref\\_len: 18933.0\n* src\\_name: Portuguese\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: pt\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: por-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
134,
402
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pt #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### por-ukr\n\n\n* source group: Portuguese\n* target group: Ukrainian\n* OPUS readme: por-ukr\n* model: transformer-align\n* source language(s): por\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.8, chr-F: 0.616### System Info:\n\n\n* hf\\_name: por-ukr\n* source\\_languages: por\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pt', 'uk']\n* src\\_constituents: {'por'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: por\n* tgt\\_alpha3: ukr\n* short\\_pair: pt-uk\n* chrF2\\_score: 0.616\n* bleu: 39.8\n* brevity\\_penalty: 0.9990000000000001\n* ref\\_len: 18933.0\n* src\\_name: Portuguese\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: pt\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: por-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### run-deu
* source group: Rundi
* target group: German
* OPUS readme: [run-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-deu/README.md)
* model: transformer-align
* source language(s): run
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/run-deu/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-deu/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-deu/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.run.deu | 17.1 | 0.344 |
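A minimal usage sketch (not part of the original card), using the standard Hugging Face `transformers` Marian API; the source string is a placeholder to be replaced with real Rundi text:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-rn-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["<replace with a Rundi sentence>"]  # placeholder input
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```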
### System Info:
- hf_name: run-deu
- source_languages: run
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['rn', 'de']
- src_constituents: {'run'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/run-deu/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/run-deu/opus-2020-06-16.test.txt
- src_alpha3: run
- tgt_alpha3: deu
- short_pair: rn-de
- chrF2_score: 0.344
- bleu: 17.1
- brevity_penalty: 0.961
- ref_len: 10562.0
- src_name: Rundi
- tgt_name: German
- train_date: 2020-06-16
- src_alpha2: rn
- tgt_alpha2: de
- prefer_old: False
- long_pair: run-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["rn", "de"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-rn-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"rn",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"rn",
"de"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #rn #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### run-deu
* source group: Rundi
* target group: German
* OPUS readme: run-deu
* model: transformer-align
* source language(s): run
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 17.1, chr-F: 0.344
### System Info:
* hf\_name: run-deu
* source\_languages: run
* target\_languages: deu
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['rn', 'de']
* src\_constituents: {'run'}
* tgt\_constituents: {'deu'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: run
* tgt\_alpha3: deu
* short\_pair: rn-de
* chrF2\_score: 0.344
* bleu: 17.1
* brevity\_penalty: 0.961
* ref\_len: 10562.0
* src\_name: Rundi
* tgt\_name: German
* train\_date: 2020-06-16
* src\_alpha2: rn
* tgt\_alpha2: de
* prefer\_old: False
* long\_pair: run-deu
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### run-deu\n\n\n* source group: Rundi\n* target group: German\n* OPUS readme: run-deu\n* model: transformer-align\n* source language(s): run\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.1, chr-F: 0.344",
"### System Info:\n\n\n* hf\\_name: run-deu\n* source\\_languages: run\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['rn', 'de']\n* src\\_constituents: {'run'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: run\n* tgt\\_alpha3: deu\n* short\\_pair: rn-de\n* chrF2\\_score: 0.344\n* bleu: 17.1\n* brevity\\_penalty: 0.961\n* ref\\_len: 10562.0\n* src\\_name: Rundi\n* tgt\\_name: German\n* train\\_date: 2020-06-16\n* src\\_alpha2: rn\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: run-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rn #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### run-deu\n\n\n* source group: Rundi\n* target group: German\n* OPUS readme: run-deu\n* model: transformer-align\n* source language(s): run\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.1, chr-F: 0.344",
"### System Info:\n\n\n* hf\\_name: run-deu\n* source\\_languages: run\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['rn', 'de']\n* src\\_constituents: {'run'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: run\n* tgt\\_alpha3: deu\n* short\\_pair: rn-de\n* chrF2\\_score: 0.344\n* bleu: 17.1\n* brevity\\_penalty: 0.961\n* ref\\_len: 10562.0\n* src\\_name: Rundi\n* tgt\\_name: German\n* train\\_date: 2020-06-16\n* src\\_alpha2: rn\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: run-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
134,
397
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rn #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### run-deu\n\n\n* source group: Rundi\n* target group: German\n* OPUS readme: run-deu\n* model: transformer-align\n* source language(s): run\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.1, chr-F: 0.344### System Info:\n\n\n* hf\\_name: run-deu\n* source\\_languages: run\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['rn', 'de']\n* src\\_constituents: {'run'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: run\n* tgt\\_alpha3: deu\n* short\\_pair: rn-de\n* chrF2\\_score: 0.344\n* bleu: 17.1\n* brevity\\_penalty: 0.961\n* ref\\_len: 10562.0\n* src\\_name: Rundi\n* tgt\\_name: German\n* train\\_date: 2020-06-16\n* src\\_alpha2: rn\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: run-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### run-eng
* source group: Rundi
* target group: English
* OPUS readme: [run-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-eng/README.md)
* model: transformer-align
* source language(s): run
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/run-eng/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-eng/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-eng/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.run.eng | 26.7 | 0.428 |
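A minimal usage sketch (not part of the original card), assuming the standard Hugging Face `transformers` translation pipeline; the input below is a placeholder for real Rundi text:

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-rn-en")

# Placeholder input; substitute an actual Rundi sentence.
print(translator("<Rundi sentence here>"))
```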
### System Info:
- hf_name: run-eng
- source_languages: run
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['rn', 'en']
- src_constituents: {'run'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/run-eng/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/run-eng/opus-2020-06-16.test.txt
- src_alpha3: run
- tgt_alpha3: eng
- short_pair: rn-en
- chrF2_score: 0.428
- bleu: 26.7
- brevity_penalty: 0.99
- ref_len: 10041.0
- src_name: Rundi
- tgt_name: English
- train_date: 2020-06-16
- src_alpha2: rn
- tgt_alpha2: en
- prefer_old: False
- long_pair: run-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["rn", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-rn-en | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"rn",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"rn",
"en"
] | TAGS
#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #rn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### run-eng
* source group: Rundi
* target group: English
* OPUS readme: run-eng
* model: transformer-align
* source language(s): run
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.7, chr-F: 0.428
### System Info:
* hf\_name: run-eng
* source\_languages: run
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['rn', 'en']
* src\_constituents: {'run'}
* tgt\_constituents: {'eng'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: run
* tgt\_alpha3: eng
* short\_pair: rn-en
* chrF2\_score: 0.428
* bleu: 26.7
* brevity\_penalty: 0.99
* ref\_len: 10041.0
* src\_name: Rundi
* tgt\_name: English
* train\_date: 2020-06-16
* src\_alpha2: rn
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: run-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### run-eng\n\n\n* source group: Rundi\n* target group: English\n* OPUS readme: run-eng\n* model: transformer-align\n* source language(s): run\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.428",
"### System Info:\n\n\n* hf\\_name: run-eng\n* source\\_languages: run\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['rn', 'en']\n* src\\_constituents: {'run'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: run\n* tgt\\_alpha3: eng\n* short\\_pair: rn-en\n* chrF2\\_score: 0.428\n* bleu: 26.7\n* brevity\\_penalty: 0.99\n* ref\\_len: 10041.0\n* src\\_name: Rundi\n* tgt\\_name: English\n* train\\_date: 2020-06-16\n* src\\_alpha2: rn\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: run-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #rn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### run-eng\n\n\n* source group: Rundi\n* target group: English\n* OPUS readme: run-eng\n* model: transformer-align\n* source language(s): run\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.428",
"### System Info:\n\n\n* hf\\_name: run-eng\n* source\\_languages: run\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['rn', 'en']\n* src\\_constituents: {'run'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: run\n* tgt\\_alpha3: eng\n* short\\_pair: rn-en\n* chrF2\\_score: 0.428\n* bleu: 26.7\n* brevity\\_penalty: 0.99\n* ref\\_len: 10041.0\n* src\\_name: Rundi\n* tgt\\_name: English\n* train\\_date: 2020-06-16\n* src\\_alpha2: rn\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: run-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
55,
132,
391
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #rn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### run-eng\n\n\n* source group: Rundi\n* target group: English\n* OPUS readme: run-eng\n* model: transformer-align\n* source language(s): run\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.428### System Info:\n\n\n* hf\\_name: run-eng\n* source\\_languages: run\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['rn', 'en']\n* src\\_constituents: {'run'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: run\n* tgt\\_alpha3: eng\n* short\\_pair: rn-en\n* chrF2\\_score: 0.428\n* bleu: 26.7\n* brevity\\_penalty: 0.99\n* ref\\_len: 10041.0\n* src\\_name: Rundi\n* tgt\\_name: English\n* train\\_date: 2020-06-16\n* src\\_alpha2: rn\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: run-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### run-spa
* source group: Rundi
* target group: Spanish
* OPUS readme: [run-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-spa/README.md)
* model: transformer-align
* source language(s): run
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/run-spa/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-spa/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-spa/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.run.spa | 14.4 | 0.376 |
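As with the other cards, a minimal usage sketch (not part of the original card) with the Hugging Face `transformers` Marian API; the input is a placeholder for real Rundi text:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-rn-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Placeholder input; substitute an actual Rundi sentence.
batch = tokenizer(["<Rundi sentence here>"], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```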
### System Info:
- hf_name: run-spa
- source_languages: run
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['rn', 'es']
- src_constituents: {'run'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/run-spa/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/run-spa/opus-2020-06-16.test.txt
- src_alpha3: run
- tgt_alpha3: spa
- short_pair: rn-es
- chrF2_score: 0.376
- bleu: 14.4
- brevity_penalty: 1.0
- ref_len: 5167.0
- src_name: Rundi
- tgt_name: Spanish
- train_date: 2020-06-16
- src_alpha2: rn
- tgt_alpha2: es
- prefer_old: False
- long_pair: run-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["rn", "es"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-rn-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"rn",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"rn",
"es"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #rn #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### run-spa
* source group: Rundi
* target group: Spanish
* OPUS readme: run-spa
* model: transformer-align
* source language(s): run
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 14.4, chr-F: 0.376
### System Info:
* hf\_name: run-spa
* source\_languages: run
* target\_languages: spa
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['rn', 'es']
* src\_constituents: {'run'}
* tgt\_constituents: {'spa'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: run
* tgt\_alpha3: spa
* short\_pair: rn-es
* chrF2\_score: 0.376
* bleu: 14.4
* brevity\_penalty: 1.0
* ref\_len: 5167.0
* src\_name: Rundi
* tgt\_name: Spanish
* train\_date: 2020-06-16
* src\_alpha2: rn
* tgt\_alpha2: es
* prefer\_old: False
* long\_pair: run-spa
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### run-spa\n\n\n* source group: Rundi\n* target group: Spanish\n* OPUS readme: run-spa\n* model: transformer-align\n* source language(s): run\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 14.4, chr-F: 0.376",
"### System Info:\n\n\n* hf\\_name: run-spa\n* source\\_languages: run\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['rn', 'es']\n* src\\_constituents: {'run'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: run\n* tgt\\_alpha3: spa\n* short\\_pair: rn-es\n* chrF2\\_score: 0.376\n* bleu: 14.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 5167.0\n* src\\_name: Rundi\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-16\n* src\\_alpha2: rn\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: run-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rn #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### run-spa\n\n\n* source group: Rundi\n* target group: Spanish\n* OPUS readme: run-spa\n* model: transformer-align\n* source language(s): run\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 14.4, chr-F: 0.376",
"### System Info:\n\n\n* hf\\_name: run-spa\n* source\\_languages: run\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['rn', 'es']\n* src\\_constituents: {'run'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: run\n* tgt\\_alpha3: spa\n* short\\_pair: rn-es\n* chrF2\\_score: 0.376\n* bleu: 14.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 5167.0\n* src\\_name: Rundi\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-16\n* src\\_alpha2: rn\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: run-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
132,
392
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rn #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### run-spa\n\n\n* source group: Rundi\n* target group: Spanish\n* OPUS readme: run-spa\n* model: transformer-align\n* source language(s): run\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 14.4, chr-F: 0.376### System Info:\n\n\n* hf\\_name: run-spa\n* source\\_languages: run\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['rn', 'es']\n* src\\_constituents: {'run'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: run\n* tgt\\_alpha3: spa\n* short\\_pair: rn-es\n* chrF2\\_score: 0.376\n* bleu: 14.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 5167.0\n* src\\_name: Rundi\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-16\n* src\\_alpha2: rn\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: run-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### run-fra
* source group: Rundi
* target group: French
* OPUS readme: [run-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-fra/README.md)
* model: transformer-align
* source language(s): run
* target language(s): fra
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.run.fra | 18.2 | 0.397 |
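A minimal usage sketch (not part of the original card), assuming the standard Hugging Face `transformers` translation pipeline; the input is a placeholder for real Rundi text:

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-rn-fr")
print(translator("<Rundi sentence here>"))  # placeholder input
```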
### System Info:
- hf_name: run-fra
- source_languages: run
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['rn', 'fr']
- src_constituents: {'run'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/run-fra/opus-2020-06-16.test.txt
- src_alpha3: run
- tgt_alpha3: fra
- short_pair: rn-fr
- chrF2_score: 0.397
- bleu: 18.2
- brevity_penalty: 1.0
- ref_len: 7496.0
- src_name: Rundi
- tgt_name: French
- train_date: 2020-06-16
- src_alpha2: rn
- tgt_alpha2: fr
- prefer_old: False
- long_pair: run-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["rn", "fr"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-rn-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"rn",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"rn",
"fr"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #rn #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### run-fra
* source group: Rundi
* target group: French
* OPUS readme: run-fra
* model: transformer-align
* source language(s): run
* target language(s): fra
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 18.2, chr-F: 0.397
### System Info:
* hf\_name: run-fra
* source\_languages: run
* target\_languages: fra
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['rn', 'fr']
* src\_constituents: {'run'}
* tgt\_constituents: {'fra'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: run
* tgt\_alpha3: fra
* short\_pair: rn-fr
* chrF2\_score: 0.397
* bleu: 18.2
* brevity\_penalty: 1.0
* ref\_len: 7496.0
* src\_name: Rundi
* tgt\_name: French
* train\_date: 2020-06-16
* src\_alpha2: rn
* tgt\_alpha2: fr
* prefer\_old: False
* long\_pair: run-fra
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### run-fra\n\n\n* source group: Rundi\n* target group: French\n* OPUS readme: run-fra\n* model: transformer-align\n* source language(s): run\n* target language(s): fra\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.2, chr-F: 0.397",
"### System Info:\n\n\n* hf\\_name: run-fra\n* source\\_languages: run\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['rn', 'fr']\n* src\\_constituents: {'run'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: run\n* tgt\\_alpha3: fra\n* short\\_pair: rn-fr\n* chrF2\\_score: 0.397\n* bleu: 18.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 7496.0\n* src\\_name: Rundi\n* tgt\\_name: French\n* train\\_date: 2020-06-16\n* src\\_alpha2: rn\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: run-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rn #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### run-fra\n\n\n* source group: Rundi\n* target group: French\n* OPUS readme: run-fra\n* model: transformer-align\n* source language(s): run\n* target language(s): fra\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.2, chr-F: 0.397",
"### System Info:\n\n\n* hf\\_name: run-fra\n* source\\_languages: run\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['rn', 'fr']\n* src\\_constituents: {'run'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: run\n* tgt\\_alpha3: fra\n* short\\_pair: rn-fr\n* chrF2\\_score: 0.397\n* bleu: 18.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 7496.0\n* src\\_name: Rundi\n* tgt\\_name: French\n* train\\_date: 2020-06-16\n* src\\_alpha2: rn\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: run-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
132,
392
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rn #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### run-fra\n\n\n* source group: Rundi\n* target group: French\n* OPUS readme: run-fra\n* model: transformer-align\n* source language(s): run\n* target language(s): fra\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.2, chr-F: 0.397### System Info:\n\n\n* hf\\_name: run-fra\n* source\\_languages: run\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['rn', 'fr']\n* src\\_constituents: {'run'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: run\n* tgt\\_alpha3: fra\n* short\\_pair: rn-fr\n* chrF2\\_score: 0.397\n* bleu: 18.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 7496.0\n* src\\_name: Rundi\n* tgt\\_name: French\n* train\\_date: 2020-06-16\n* src\\_alpha2: rn\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: run-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### run-rus
* source group: Rundi
* target group: Russian
* OPUS readme: [run-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-rus/README.md)
* model: transformer-align
* source language(s): run
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/run-rus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-rus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/run-rus/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.run.rus | 17.1 | 0.321 |
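A minimal usage sketch (not part of the original card), using the Hugging Face `transformers` Marian API; the input is a placeholder for real Rundi text:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-rn-ru"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Placeholder input; substitute an actual Rundi sentence.
batch = tokenizer(["<Rundi sentence here>"], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```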
### System Info:
- hf_name: run-rus
- source_languages: run
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/run-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['rn', 'ru']
- src_constituents: {'run'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/run-rus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/run-rus/opus-2020-06-16.test.txt
- src_alpha3: run
- tgt_alpha3: rus
- short_pair: rn-ru
- chrF2_score: 0.321
- bleu: 17.1
- brevity_penalty: 1.0
- ref_len: 6635.0
- src_name: Rundi
- tgt_name: Russian
- train_date: 2020-06-16
- src_alpha2: rn
- tgt_alpha2: ru
- prefer_old: False
- long_pair: run-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["rn", "ru"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-rn-ru | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"rn",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"rn",
"ru"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #rn #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### run-rus
* source group: Rundi
* target group: Russian
* OPUS readme: run-rus
* model: transformer-align
* source language(s): run
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 17.1, chr-F: 0.321
### System Info:
* hf\_name: run-rus
* source\_languages: run
* target\_languages: rus
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['rn', 'ru']
* src\_constituents: {'run'}
* tgt\_constituents: {'rus'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: run
* tgt\_alpha3: rus
* short\_pair: rn-ru
* chrF2\_score: 0.321
* bleu: 17.1
* brevity\_penalty: 1.0
* ref\_len: 6635.0
* src\_name: Rundi
* tgt\_name: Russian
* train\_date: 2020-06-16
* src\_alpha2: rn
* tgt\_alpha2: ru
* prefer\_old: False
* long\_pair: run-rus
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### run-rus\n\n\n* source group: Rundi\n* target group: Russian\n* OPUS readme: run-rus\n* model: transformer-align\n* source language(s): run\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.1, chr-F: 0.321",
"### System Info:\n\n\n* hf\\_name: run-rus\n* source\\_languages: run\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['rn', 'ru']\n* src\\_constituents: {'run'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: run\n* tgt\\_alpha3: rus\n* short\\_pair: rn-ru\n* chrF2\\_score: 0.321\n* bleu: 17.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 6635.0\n* src\\_name: Rundi\n* tgt\\_name: Russian\n* train\\_date: 2020-06-16\n* src\\_alpha2: rn\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: run-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rn #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### run-rus\n\n\n* source group: Rundi\n* target group: Russian\n* OPUS readme: run-rus\n* model: transformer-align\n* source language(s): run\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.1, chr-F: 0.321",
"### System Info:\n\n\n* hf\\_name: run-rus\n* source\\_languages: run\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['rn', 'ru']\n* src\\_constituents: {'run'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: run\n* tgt\\_alpha3: rus\n* short\\_pair: rn-ru\n* chrF2\\_score: 0.321\n* bleu: 17.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 6635.0\n* src\\_name: Rundi\n* tgt\\_name: Russian\n* train\\_date: 2020-06-16\n* src\\_alpha2: rn\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: run-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
131,
390
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rn #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### run-rus\n\n\n* source group: Rundi\n* target group: Russian\n* OPUS readme: run-rus\n* model: transformer-align\n* source language(s): run\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.1, chr-F: 0.321### System Info:\n\n\n* hf\\_name: run-rus\n* source\\_languages: run\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['rn', 'ru']\n* src\\_constituents: {'run'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: run\n* tgt\\_alpha3: rus\n* short\\_pair: rn-ru\n* chrF2\\_score: 0.321\n* bleu: 17.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 6635.0\n* src\\_name: Rundi\n* tgt\\_name: Russian\n* train\\_date: 2020-06-16\n* src\\_alpha2: rn\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: run-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-rnd-en
* source languages: rnd
* target languages: en
* OPUS readme: [rnd-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rnd-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rnd-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.rnd.en | 37.8 | 0.531 |
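A minimal usage sketch (assuming the standard Transformers translation pipeline; the input string below is only a placeholder, not verified Ruund text):

```python
from transformers import pipeline

# Load this checkpoint through the high-level translation pipeline.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-rnd-en")

# Placeholder source text; substitute a real Ruund sentence here.
print(translator("...")[0]["translation_text"])
```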
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-rnd-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"rnd",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #rnd #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-rnd-en
* source languages: rnd
* target languages: en
* OPUS readme: rnd-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.8, chr-F: 0.531
| [
"### opus-mt-rnd-en\n\n\n* source languages: rnd\n* target languages: en\n* OPUS readme: rnd-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.8, chr-F: 0.531"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rnd #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-rnd-en\n\n\n* source languages: rnd\n* target languages: en\n* OPUS readme: rnd-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.8, chr-F: 0.531"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rnd #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-rnd-en\n\n\n* source languages: rnd\n* target languages: en\n* OPUS readme: rnd-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.8, chr-F: 0.531"
] |
translation | transformers |
### opus-mt-rnd-fr
* source languages: rnd
* target languages: fr
* OPUS readme: [rnd-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rnd-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rnd-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.rnd.fr | 22.1 | 0.392 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-rnd-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"rnd",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #rnd #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-rnd-fr
* source languages: rnd
* target languages: fr
* OPUS readme: rnd-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.1, chr-F: 0.392
| [
"### opus-mt-rnd-fr\n\n\n* source languages: rnd\n* target languages: fr\n* OPUS readme: rnd-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.1, chr-F: 0.392"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rnd #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-rnd-fr\n\n\n* source languages: rnd\n* target languages: fr\n* OPUS readme: rnd-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.1, chr-F: 0.392"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rnd #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-rnd-fr\n\n\n* source languages: rnd\n* target languages: fr\n* OPUS readme: rnd-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.1, chr-F: 0.392"
] |
translation | transformers |
### opus-mt-rnd-sv
* source languages: rnd
* target languages: sv
* OPUS readme: [rnd-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rnd-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rnd-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rnd-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.rnd.sv | 21.2 | 0.387 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-rnd-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"rnd",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #rnd #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-rnd-sv
* source languages: rnd
* target languages: sv
* OPUS readme: rnd-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.2, chr-F: 0.387
| [
"### opus-mt-rnd-sv\n\n\n* source languages: rnd\n* target languages: sv\n* OPUS readme: rnd-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.2, chr-F: 0.387"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rnd #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-rnd-sv\n\n\n* source languages: rnd\n* target languages: sv\n* OPUS readme: rnd-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.2, chr-F: 0.387"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rnd #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-rnd-sv\n\n\n* source languages: rnd\n* target languages: sv\n* OPUS readme: rnd-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.2, chr-F: 0.387"
] |
translation | transformers |
### ron-epo
* source group: Romanian
* target group: Esperanto
* OPUS readme: [ron-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ron-epo/README.md)
* model: transformer-align
* source language(s): ron
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ron-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ron-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ron-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ron.epo | 27.8 | 0.495 |
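A minimal sketch of loading this checkpoint with the MarianMT classes in Transformers (the Romanian example sentence is illustrative only):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ro-eo"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize a Romanian sentence, generate, and decode the Esperanto output.
batch = tokenizer(["Bună ziua, ce mai faci?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```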
### System Info:
- hf_name: ron-epo
- source_languages: ron
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ron-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ro', 'eo']
- src_constituents: {'ron'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ron-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ron-epo/opus-2020-06-16.test.txt
- src_alpha3: ron
- tgt_alpha3: epo
- short_pair: ro-eo
- chrF2_score: 0.495
- bleu: 27.8
- brevity_penalty: 0.955
- ref_len: 25751.0
- src_name: Romanian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: ro
- tgt_alpha2: eo
- prefer_old: False
- long_pair: ron-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ro", "eo"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ro-eo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ro",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ro",
"eo"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ro #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ron-epo
* source group: Romanian
* target group: Esperanto
* OPUS readme: ron-epo
* model: transformer-align
* source language(s): ron
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.8, chr-F: 0.495
### System Info:
* hf\_name: ron-epo
* source\_languages: ron
* target\_languages: epo
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ro', 'eo']
* src\_constituents: {'ron'}
* tgt\_constituents: {'epo'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ron
* tgt\_alpha3: epo
* short\_pair: ro-eo
* chrF2\_score: 0.495
* bleu: 27.8
* brevity\_penalty: 0.955
* ref\_len: 25751.0
* src\_name: Romanian
* tgt\_name: Esperanto
* train\_date: 2020-06-16
* src\_alpha2: ro
* tgt\_alpha2: eo
* prefer\_old: False
* long\_pair: ron-epo
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ron-epo\n\n\n* source group: Romanian\n* target group: Esperanto\n* OPUS readme: ron-epo\n* model: transformer-align\n* source language(s): ron\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.495",
"### System Info:\n\n\n* hf\\_name: ron-epo\n* source\\_languages: ron\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ro', 'eo']\n* src\\_constituents: {'ron'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ron\n* tgt\\_alpha3: epo\n* short\\_pair: ro-eo\n* chrF2\\_score: 0.495\n* bleu: 27.8\n* brevity\\_penalty: 0.955\n* ref\\_len: 25751.0\n* src\\_name: Romanian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: ro\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: ron-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ro #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ron-epo\n\n\n* source group: Romanian\n* target group: Esperanto\n* OPUS readme: ron-epo\n* model: transformer-align\n* source language(s): ron\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.495",
"### System Info:\n\n\n* hf\\_name: ron-epo\n* source\\_languages: ron\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ro', 'eo']\n* src\\_constituents: {'ron'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ron\n* tgt\\_alpha3: epo\n* short\\_pair: ro-eo\n* chrF2\\_score: 0.495\n* bleu: 27.8\n* brevity\\_penalty: 0.955\n* ref\\_len: 25751.0\n* src\\_name: Romanian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: ro\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: ron-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
135,
400
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ro #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ron-epo\n\n\n* source group: Romanian\n* target group: Esperanto\n* OPUS readme: ron-epo\n* model: transformer-align\n* source language(s): ron\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.495### System Info:\n\n\n* hf\\_name: ron-epo\n* source\\_languages: ron\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ro', 'eo']\n* src\\_constituents: {'ron'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ron\n* tgt\\_alpha3: epo\n* short\\_pair: ro-eo\n* chrF2\\_score: 0.495\n* bleu: 27.8\n* brevity\\_penalty: 0.955\n* ref\\_len: 25751.0\n* src\\_name: Romanian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: ro\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: ron-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-ro-fi
* source languages: ro
* target languages: fi
* OPUS readme: [ro-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ro-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ro.fi | 25.2 | 0.521 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ro-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ro",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ro #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ro-fi
* source languages: ro
* target languages: fi
* OPUS readme: ro-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.2, chr-F: 0.521
| [
"### opus-mt-ro-fi\n\n\n* source languages: ro\n* target languages: fi\n* OPUS readme: ro-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.2, chr-F: 0.521"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ro #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ro-fi\n\n\n* source languages: ro\n* target languages: fi\n* OPUS readme: ro-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.2, chr-F: 0.521"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ro #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ro-fi\n\n\n* source languages: ro\n* target languages: fi\n* OPUS readme: ro-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.2, chr-F: 0.521"
] |
translation | transformers |
### opus-mt-ro-fr
* source languages: ro
* target languages: fr
* OPUS readme: [ro-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ro-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ro-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ro.fr | 54.5 | 0.697 |
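The BLEU and chr-F figures above can be reproduced in outline with sacrebleu once the test-set hypotheses and references have been extracted from the files linked above; a hedged sketch follows (the file names are placeholders, and note that sacrebleu reports chrF on a 0–100 scale while the table uses 0–1):

```python
import sacrebleu

# Placeholder files: decoded model outputs and French references
# taken from the Tatoeba ro-fr test set linked above.
with open("opus-2020-01-16.hyp.fr", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]
with open("opus-2020-01-16.ref.fr", encoding="utf-8") as f:
    refs = [line.strip() for line in f]

print(sacrebleu.corpus_bleu(hyps, [refs]).score)   # compare with BLEU 54.5
print(sacrebleu.corpus_chrf(hyps, [refs]).score)   # compare with chr-F 0.697 (scaled x100)
```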
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ro-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ro",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ro #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ro-fr
* source languages: ro
* target languages: fr
* OPUS readme: ro-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 54.5, chr-F: 0.697
| [
"### opus-mt-ro-fr\n\n\n* source languages: ro\n* target languages: fr\n* OPUS readme: ro-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 54.5, chr-F: 0.697"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ro #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ro-fr\n\n\n* source languages: ro\n* target languages: fr\n* OPUS readme: ro-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 54.5, chr-F: 0.697"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ro #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ro-fr\n\n\n* source languages: ro\n* target languages: fr\n* OPUS readme: ro-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 54.5, chr-F: 0.697"
] |
translation | transformers |
### opus-mt-ro-sv
* source languages: ro
* target languages: sv
* OPUS readme: [ro-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ro-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ro-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ro-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ro.sv | 31.2 | 0.529 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ro-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ro",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ro #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ro-sv
* source languages: ro
* target languages: sv
* OPUS readme: ro-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.2, chr-F: 0.529
| [
"### opus-mt-ro-sv\n\n\n* source languages: ro\n* target languages: sv\n* OPUS readme: ro-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.2, chr-F: 0.529"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ro #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ro-sv\n\n\n* source languages: ro\n* target languages: sv\n* OPUS readme: ro-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.2, chr-F: 0.529"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ro #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ro-sv\n\n\n* source languages: ro\n* target languages: sv\n* OPUS readme: ro-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.2, chr-F: 0.529"
] |
translation | transformers |
### roa-eng
* source group: Romance languages
* target group: English
* OPUS readme: [roa-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/roa-eng/README.md)
* model: transformer
* source language(s): arg ast cat cos egl ext fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lij lld_Latn lmo max_Latn mfe min mwl oci pap pms por roh ron scn spa tmw_Latn vec wln zlm_Latn zsm_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/roa-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/roa-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/roa-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-enro-roneng.ron.eng | 37.1 | 0.631 |
| newsdiscussdev2015-enfr-fraeng.fra.eng | 31.6 | 0.564 |
| newsdiscusstest2015-enfr-fraeng.fra.eng | 36.1 | 0.592 |
| newssyscomb2009-fraeng.fra.eng | 29.3 | 0.563 |
| newssyscomb2009-itaeng.ita.eng | 33.1 | 0.589 |
| newssyscomb2009-spaeng.spa.eng | 29.2 | 0.562 |
| news-test2008-fraeng.fra.eng | 25.2 | 0.533 |
| news-test2008-spaeng.spa.eng | 26.6 | 0.542 |
| newstest2009-fraeng.fra.eng | 28.6 | 0.557 |
| newstest2009-itaeng.ita.eng | 32.0 | 0.580 |
| newstest2009-spaeng.spa.eng | 28.9 | 0.559 |
| newstest2010-fraeng.fra.eng | 29.9 | 0.573 |
| newstest2010-spaeng.spa.eng | 33.3 | 0.596 |
| newstest2011-fraeng.fra.eng | 31.2 | 0.585 |
| newstest2011-spaeng.spa.eng | 32.3 | 0.584 |
| newstest2012-fraeng.fra.eng | 31.3 | 0.580 |
| newstest2012-spaeng.spa.eng | 35.3 | 0.606 |
| newstest2013-fraeng.fra.eng | 31.9 | 0.575 |
| newstest2013-spaeng.spa.eng | 32.8 | 0.592 |
| newstest2014-fren-fraeng.fra.eng | 34.6 | 0.611 |
| newstest2016-enro-roneng.ron.eng | 35.8 | 0.614 |
| Tatoeba-test.arg-eng.arg.eng | 38.7 | 0.512 |
| Tatoeba-test.ast-eng.ast.eng | 35.2 | 0.520 |
| Tatoeba-test.cat-eng.cat.eng | 54.9 | 0.703 |
| Tatoeba-test.cos-eng.cos.eng | 68.1 | 0.666 |
| Tatoeba-test.egl-eng.egl.eng | 6.7 | 0.209 |
| Tatoeba-test.ext-eng.ext.eng | 24.2 | 0.427 |
| Tatoeba-test.fra-eng.fra.eng | 53.9 | 0.691 |
| Tatoeba-test.frm-eng.frm.eng | 25.7 | 0.423 |
| Tatoeba-test.gcf-eng.gcf.eng | 14.8 | 0.288 |
| Tatoeba-test.glg-eng.glg.eng | 54.6 | 0.703 |
| Tatoeba-test.hat-eng.hat.eng | 37.0 | 0.540 |
| Tatoeba-test.ita-eng.ita.eng | 64.8 | 0.768 |
| Tatoeba-test.lad-eng.lad.eng | 21.7 | 0.452 |
| Tatoeba-test.lij-eng.lij.eng | 11.2 | 0.299 |
| Tatoeba-test.lld-eng.lld.eng | 10.8 | 0.273 |
| Tatoeba-test.lmo-eng.lmo.eng | 5.8 | 0.260 |
| Tatoeba-test.mfe-eng.mfe.eng | 63.1 | 0.819 |
| Tatoeba-test.msa-eng.msa.eng | 40.9 | 0.592 |
| Tatoeba-test.multi.eng | 54.9 | 0.697 |
| Tatoeba-test.mwl-eng.mwl.eng | 44.6 | 0.674 |
| Tatoeba-test.oci-eng.oci.eng | 20.5 | 0.404 |
| Tatoeba-test.pap-eng.pap.eng | 56.2 | 0.669 |
| Tatoeba-test.pms-eng.pms.eng | 10.3 | 0.324 |
| Tatoeba-test.por-eng.por.eng | 59.7 | 0.738 |
| Tatoeba-test.roh-eng.roh.eng | 14.8 | 0.378 |
| Tatoeba-test.ron-eng.ron.eng | 55.2 | 0.703 |
| Tatoeba-test.scn-eng.scn.eng | 10.2 | 0.259 |
| Tatoeba-test.spa-eng.spa.eng | 56.2 | 0.714 |
| Tatoeba-test.vec-eng.vec.eng | 13.8 | 0.317 |
| Tatoeba-test.wln-eng.wln.eng | 17.3 | 0.323 |
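Because only the source side of this model is multilingual, sentences from the supported Romance languages can be batched together without any language token; a minimal sketch (the example sentences are placeholders):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-roa-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Mixed-language batch: French, Spanish and Portuguese in, English out.
sentences = [
    "Ceci est une phrase en français.",
    "Esta es una frase en español.",
    "Esta é uma frase em português.",
]
batch = tokenizer(sentences, return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```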
### System Info:
- hf_name: roa-eng
- source_languages: roa
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/roa-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'roa', 'en']
- src_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'lmo', 'mwl', 'lij', 'lad_Latn', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/roa-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/roa-eng/opus2m-2020-08-01.test.txt
- src_alpha3: roa
- tgt_alpha3: eng
- short_pair: roa-en
- chrF2_score: 0.6970000000000001
- bleu: 54.9
- brevity_penalty: 0.9790000000000001
- ref_len: 74762.0
- src_name: Romance languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: roa
- tgt_alpha2: en
- prefer_old: False
- long_pair: roa-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["it", "ca", "rm", "es", "ro", "gl", "co", "wa", "pt", "oc", "an", "id", "fr", "ht", "roa", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-roa-en | null | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"it",
"ca",
"rm",
"es",
"ro",
"gl",
"co",
"wa",
"pt",
"oc",
"an",
"id",
"fr",
"ht",
"roa",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"it",
"ca",
"rm",
"es",
"ro",
"gl",
"co",
"wa",
"pt",
"oc",
"an",
"id",
"fr",
"ht",
"roa",
"en"
] | TAGS
#transformers #pytorch #tf #rust #marian #text2text-generation #translation #it #ca #rm #es #ro #gl #co #wa #pt #oc #an #id #fr #ht #roa #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### roa-eng
* source group: Romance languages
* target group: English
* OPUS readme: roa-eng
* model: transformer
* source language(s): arg ast cat cos egl ext fra frm\_Latn gcf\_Latn glg hat ind ita lad lad\_Latn lij lld\_Latn lmo max\_Latn mfe min mwl oci pap pms por roh ron scn spa tmw\_Latn vec wln zlm\_Latn zsm\_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.1, chr-F: 0.631
testset: URL, BLEU: 31.6, chr-F: 0.564
testset: URL, BLEU: 36.1, chr-F: 0.592
testset: URL, BLEU: 29.3, chr-F: 0.563
testset: URL, BLEU: 33.1, chr-F: 0.589
testset: URL, BLEU: 29.2, chr-F: 0.562
testset: URL, BLEU: 25.2, chr-F: 0.533
testset: URL, BLEU: 26.6, chr-F: 0.542
testset: URL, BLEU: 28.6, chr-F: 0.557
testset: URL, BLEU: 32.0, chr-F: 0.580
testset: URL, BLEU: 28.9, chr-F: 0.559
testset: URL, BLEU: 29.9, chr-F: 0.573
testset: URL, BLEU: 33.3, chr-F: 0.596
testset: URL, BLEU: 31.2, chr-F: 0.585
testset: URL, BLEU: 32.3, chr-F: 0.584
testset: URL, BLEU: 31.3, chr-F: 0.580
testset: URL, BLEU: 35.3, chr-F: 0.606
testset: URL, BLEU: 31.9, chr-F: 0.575
testset: URL, BLEU: 32.8, chr-F: 0.592
testset: URL, BLEU: 34.6, chr-F: 0.611
testset: URL, BLEU: 35.8, chr-F: 0.614
testset: URL, BLEU: 38.7, chr-F: 0.512
testset: URL, BLEU: 35.2, chr-F: 0.520
testset: URL, BLEU: 54.9, chr-F: 0.703
testset: URL, BLEU: 68.1, chr-F: 0.666
testset: URL, BLEU: 6.7, chr-F: 0.209
testset: URL, BLEU: 24.2, chr-F: 0.427
testset: URL, BLEU: 53.9, chr-F: 0.691
testset: URL, BLEU: 25.7, chr-F: 0.423
testset: URL, BLEU: 14.8, chr-F: 0.288
testset: URL, BLEU: 54.6, chr-F: 0.703
testset: URL, BLEU: 37.0, chr-F: 0.540
testset: URL, BLEU: 64.8, chr-F: 0.768
testset: URL, BLEU: 21.7, chr-F: 0.452
testset: URL, BLEU: 11.2, chr-F: 0.299
testset: URL, BLEU: 10.8, chr-F: 0.273
testset: URL, BLEU: 5.8, chr-F: 0.260
testset: URL, BLEU: 63.1, chr-F: 0.819
testset: URL, BLEU: 40.9, chr-F: 0.592
testset: URL, BLEU: 54.9, chr-F: 0.697
testset: URL, BLEU: 44.6, chr-F: 0.674
testset: URL, BLEU: 20.5, chr-F: 0.404
testset: URL, BLEU: 56.2, chr-F: 0.669
testset: URL, BLEU: 10.3, chr-F: 0.324
testset: URL, BLEU: 59.7, chr-F: 0.738
testset: URL, BLEU: 14.8, chr-F: 0.378
testset: URL, BLEU: 55.2, chr-F: 0.703
testset: URL, BLEU: 10.2, chr-F: 0.259
testset: URL, BLEU: 56.2, chr-F: 0.714
testset: URL, BLEU: 13.8, chr-F: 0.317
testset: URL, BLEU: 17.3, chr-F: 0.323
### System Info:
* hf\_name: roa-eng
* source\_languages: roa
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'roa', 'en']
* src\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'lmo', 'mwl', 'lij', 'lad\_Latn', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\_Latn', 'gcf\_Latn', 'lld\_Latn', 'min', 'tmw\_Latn', 'cos', 'wln', 'zlm\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\_Latn', 'frm\_Latn', 'scn', 'mfe'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: roa
* tgt\_alpha3: eng
* short\_pair: roa-en
* chrF2\_score: 0.6970000000000001
* bleu: 54.9
* brevity\_penalty: 0.9790000000000001
* ref\_len: 74762.0
* src\_name: Romance languages
* tgt\_name: English
* train\_date: 2020-08-01
* src\_alpha2: roa
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: roa-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### roa-eng\n\n\n* source group: Romance languages\n* target group: English\n* OPUS readme: roa-eng\n* model: transformer\n* source language(s): arg ast cat cos egl ext fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lij lld\\_Latn lmo max\\_Latn mfe min mwl oci pap pms por roh ron scn spa tmw\\_Latn vec wln zlm\\_Latn zsm\\_Latn\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.1, chr-F: 0.631\ntestset: URL, BLEU: 31.6, chr-F: 0.564\ntestset: URL, BLEU: 36.1, chr-F: 0.592\ntestset: URL, BLEU: 29.3, chr-F: 0.563\ntestset: URL, BLEU: 33.1, chr-F: 0.589\ntestset: URL, BLEU: 29.2, chr-F: 0.562\ntestset: URL, BLEU: 25.2, chr-F: 0.533\ntestset: URL, BLEU: 26.6, chr-F: 0.542\ntestset: URL, BLEU: 28.6, chr-F: 0.557\ntestset: URL, BLEU: 32.0, chr-F: 0.580\ntestset: URL, BLEU: 28.9, chr-F: 0.559\ntestset: URL, BLEU: 29.9, chr-F: 0.573\ntestset: URL, BLEU: 33.3, chr-F: 0.596\ntestset: URL, BLEU: 31.2, chr-F: 0.585\ntestset: URL, BLEU: 32.3, chr-F: 0.584\ntestset: URL, BLEU: 31.3, chr-F: 0.580\ntestset: URL, BLEU: 35.3, chr-F: 0.606\ntestset: URL, BLEU: 31.9, chr-F: 0.575\ntestset: URL, BLEU: 32.8, chr-F: 0.592\ntestset: URL, BLEU: 34.6, chr-F: 0.611\ntestset: URL, BLEU: 35.8, chr-F: 0.614\ntestset: URL, BLEU: 38.7, chr-F: 0.512\ntestset: URL, BLEU: 35.2, chr-F: 0.520\ntestset: URL, BLEU: 54.9, chr-F: 0.703\ntestset: URL, BLEU: 68.1, chr-F: 0.666\ntestset: URL, BLEU: 6.7, chr-F: 0.209\ntestset: URL, BLEU: 24.2, chr-F: 0.427\ntestset: URL, BLEU: 53.9, chr-F: 0.691\ntestset: URL, BLEU: 25.7, chr-F: 0.423\ntestset: URL, BLEU: 14.8, chr-F: 0.288\ntestset: URL, BLEU: 54.6, chr-F: 0.703\ntestset: URL, BLEU: 37.0, chr-F: 0.540\ntestset: URL, BLEU: 64.8, chr-F: 0.768\ntestset: URL, BLEU: 21.7, chr-F: 0.452\ntestset: URL, BLEU: 11.2, chr-F: 0.299\ntestset: URL, BLEU: 10.8, chr-F: 0.273\ntestset: URL, BLEU: 5.8, chr-F: 0.260\ntestset: URL, BLEU: 63.1, chr-F: 0.819\ntestset: URL, BLEU: 40.9, chr-F: 0.592\ntestset: URL, BLEU: 54.9, chr-F: 0.697\ntestset: URL, BLEU: 44.6, chr-F: 0.674\ntestset: URL, BLEU: 20.5, chr-F: 0.404\ntestset: URL, BLEU: 56.2, chr-F: 0.669\ntestset: URL, BLEU: 10.3, chr-F: 0.324\ntestset: URL, BLEU: 59.7, chr-F: 0.738\ntestset: URL, BLEU: 14.8, chr-F: 0.378\ntestset: URL, BLEU: 55.2, chr-F: 0.703\ntestset: URL, BLEU: 10.2, chr-F: 0.259\ntestset: URL, BLEU: 56.2, chr-F: 0.714\ntestset: URL, BLEU: 13.8, chr-F: 0.317\ntestset: URL, BLEU: 17.3, chr-F: 0.323",
"### System Info:\n\n\n* hf\\_name: roa-eng\n* source\\_languages: roa\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'roa', 'en']\n* src\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'lmo', 'mwl', 'lij', 'lad\\_Latn', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\\_Latn', 'frm\\_Latn', 'scn', 'mfe'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: roa\n* tgt\\_alpha3: eng\n* short\\_pair: roa-en\n* chrF2\\_score: 0.6970000000000001\n* bleu: 54.9\n* brevity\\_penalty: 0.9790000000000001\n* ref\\_len: 74762.0\n* src\\_name: Romance languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: roa\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: roa-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #it #ca #rm #es #ro #gl #co #wa #pt #oc #an #id #fr #ht #roa #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### roa-eng\n\n\n* source group: Romance languages\n* target group: English\n* OPUS readme: roa-eng\n* model: transformer\n* source language(s): arg ast cat cos egl ext fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lij lld\\_Latn lmo max\\_Latn mfe min mwl oci pap pms por roh ron scn spa tmw\\_Latn vec wln zlm\\_Latn zsm\\_Latn\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.1, chr-F: 0.631\ntestset: URL, BLEU: 31.6, chr-F: 0.564\ntestset: URL, BLEU: 36.1, chr-F: 0.592\ntestset: URL, BLEU: 29.3, chr-F: 0.563\ntestset: URL, BLEU: 33.1, chr-F: 0.589\ntestset: URL, BLEU: 29.2, chr-F: 0.562\ntestset: URL, BLEU: 25.2, chr-F: 0.533\ntestset: URL, BLEU: 26.6, chr-F: 0.542\ntestset: URL, BLEU: 28.6, chr-F: 0.557\ntestset: URL, BLEU: 32.0, chr-F: 0.580\ntestset: URL, BLEU: 28.9, chr-F: 0.559\ntestset: URL, BLEU: 29.9, chr-F: 0.573\ntestset: URL, BLEU: 33.3, chr-F: 0.596\ntestset: URL, BLEU: 31.2, chr-F: 0.585\ntestset: URL, BLEU: 32.3, chr-F: 0.584\ntestset: URL, BLEU: 31.3, chr-F: 0.580\ntestset: URL, BLEU: 35.3, chr-F: 0.606\ntestset: URL, BLEU: 31.9, chr-F: 0.575\ntestset: URL, BLEU: 32.8, chr-F: 0.592\ntestset: URL, BLEU: 34.6, chr-F: 0.611\ntestset: URL, BLEU: 35.8, chr-F: 0.614\ntestset: URL, BLEU: 38.7, chr-F: 0.512\ntestset: URL, BLEU: 35.2, chr-F: 0.520\ntestset: URL, BLEU: 54.9, chr-F: 0.703\ntestset: URL, BLEU: 68.1, chr-F: 0.666\ntestset: URL, BLEU: 6.7, chr-F: 0.209\ntestset: URL, BLEU: 24.2, chr-F: 0.427\ntestset: URL, BLEU: 53.9, chr-F: 0.691\ntestset: URL, BLEU: 25.7, chr-F: 0.423\ntestset: URL, BLEU: 14.8, chr-F: 0.288\ntestset: URL, BLEU: 54.6, chr-F: 0.703\ntestset: URL, BLEU: 37.0, chr-F: 0.540\ntestset: URL, BLEU: 64.8, chr-F: 0.768\ntestset: URL, BLEU: 21.7, chr-F: 0.452\ntestset: URL, BLEU: 11.2, chr-F: 0.299\ntestset: URL, BLEU: 10.8, chr-F: 0.273\ntestset: URL, BLEU: 5.8, chr-F: 0.260\ntestset: URL, BLEU: 63.1, chr-F: 0.819\ntestset: URL, BLEU: 40.9, chr-F: 0.592\ntestset: URL, BLEU: 54.9, chr-F: 0.697\ntestset: URL, BLEU: 44.6, chr-F: 0.674\ntestset: URL, BLEU: 20.5, chr-F: 0.404\ntestset: URL, BLEU: 56.2, chr-F: 0.669\ntestset: URL, BLEU: 10.3, chr-F: 0.324\ntestset: URL, BLEU: 59.7, chr-F: 0.738\ntestset: URL, BLEU: 14.8, chr-F: 0.378\ntestset: URL, BLEU: 55.2, chr-F: 0.703\ntestset: URL, BLEU: 10.2, chr-F: 0.259\ntestset: URL, BLEU: 56.2, chr-F: 0.714\ntestset: URL, BLEU: 13.8, chr-F: 0.317\ntestset: URL, BLEU: 17.3, chr-F: 0.323",
"### System Info:\n\n\n* hf\\_name: roa-eng\n* source\\_languages: roa\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'roa', 'en']\n* src\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'lmo', 'mwl', 'lij', 'lad\\_Latn', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\\_Latn', 'frm\\_Latn', 'scn', 'mfe'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: roa\n* tgt\\_alpha3: eng\n* short\\_pair: roa-en\n* chrF2\\_score: 0.6970000000000001\n* bleu: 54.9\n* brevity\\_penalty: 0.9790000000000001\n* ref\\_len: 74762.0\n* src\\_name: Romance languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: roa\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: roa-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
85,
1366,
674
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #it #ca #rm #es #ro #gl #co #wa #pt #oc #an #id #fr #ht #roa #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### roa-eng\n\n\n* source group: Romance languages\n* target group: English\n* OPUS readme: roa-eng\n* model: transformer\n* source language(s): arg ast cat cos egl ext fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lij lld\\_Latn lmo max\\_Latn mfe min mwl oci pap pms por roh ron scn spa tmw\\_Latn vec wln zlm\\_Latn zsm\\_Latn\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.1, chr-F: 0.631\ntestset: URL, BLEU: 31.6, chr-F: 0.564\ntestset: URL, BLEU: 36.1, chr-F: 0.592\ntestset: URL, BLEU: 29.3, chr-F: 0.563\ntestset: URL, BLEU: 33.1, chr-F: 0.589\ntestset: URL, BLEU: 29.2, chr-F: 0.562\ntestset: URL, BLEU: 25.2, chr-F: 0.533\ntestset: URL, BLEU: 26.6, chr-F: 0.542\ntestset: URL, BLEU: 28.6, chr-F: 0.557\ntestset: URL, BLEU: 32.0, chr-F: 0.580\ntestset: URL, BLEU: 28.9, chr-F: 0.559\ntestset: URL, BLEU: 29.9, chr-F: 0.573\ntestset: URL, BLEU: 33.3, chr-F: 0.596\ntestset: URL, BLEU: 31.2, chr-F: 0.585\ntestset: URL, BLEU: 32.3, chr-F: 0.584\ntestset: URL, BLEU: 31.3, chr-F: 0.580\ntestset: URL, BLEU: 35.3, chr-F: 0.606\ntestset: URL, BLEU: 31.9, chr-F: 0.575\ntestset: URL, BLEU: 32.8, chr-F: 0.592\ntestset: URL, BLEU: 34.6, chr-F: 0.611\ntestset: URL, BLEU: 35.8, chr-F: 0.614\ntestset: URL, BLEU: 38.7, chr-F: 0.512\ntestset: URL, BLEU: 35.2, chr-F: 0.520\ntestset: URL, BLEU: 54.9, chr-F: 0.703\ntestset: URL, BLEU: 68.1, chr-F: 0.666\ntestset: URL, BLEU: 6.7, chr-F: 0.209\ntestset: URL, BLEU: 24.2, chr-F: 0.427\ntestset: URL, BLEU: 53.9, chr-F: 0.691\ntestset: URL, BLEU: 25.7, chr-F: 0.423\ntestset: URL, BLEU: 14.8, chr-F: 0.288\ntestset: URL, BLEU: 54.6, chr-F: 0.703\ntestset: URL, BLEU: 37.0, chr-F: 0.540\ntestset: URL, BLEU: 64.8, chr-F: 0.768\ntestset: URL, BLEU: 21.7, chr-F: 0.452\ntestset: URL, BLEU: 11.2, chr-F: 0.299\ntestset: URL, BLEU: 10.8, chr-F: 0.273\ntestset: URL, BLEU: 5.8, chr-F: 0.260\ntestset: URL, BLEU: 63.1, chr-F: 0.819\ntestset: URL, BLEU: 40.9, chr-F: 0.592\ntestset: URL, BLEU: 54.9, chr-F: 0.697\ntestset: URL, BLEU: 44.6, chr-F: 0.674\ntestset: URL, BLEU: 20.5, chr-F: 0.404\ntestset: URL, BLEU: 56.2, chr-F: 0.669\ntestset: URL, BLEU: 10.3, chr-F: 0.324\ntestset: URL, BLEU: 59.7, chr-F: 0.738\ntestset: URL, BLEU: 14.8, chr-F: 0.378\ntestset: URL, BLEU: 55.2, chr-F: 0.703\ntestset: URL, BLEU: 10.2, chr-F: 0.259\ntestset: URL, BLEU: 56.2, chr-F: 0.714\ntestset: URL, BLEU: 13.8, chr-F: 0.317\ntestset: URL, BLEU: 17.3, chr-F: 0.323### System Info:\n\n\n* hf\\_name: roa-eng\n* source\\_languages: roa\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'roa', 'en']\n* src\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'lmo', 'mwl', 'lij', 'lad\\_Latn', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\\_Latn', 'frm\\_Latn', 'scn', 'mfe'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: 
False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: roa\n* tgt\\_alpha3: eng\n* short\\_pair: roa-en\n* chrF2\\_score: 0.6970000000000001\n* bleu: 54.9\n* brevity\\_penalty: 0.9790000000000001\n* ref\\_len: 74762.0\n* src\\_name: Romance languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: roa\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: roa-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### rus-afr
* source group: Russian
* target group: Afrikaans
* OPUS readme: [rus-afr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-afr/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): afr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-afr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-afr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-afr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.afr | 48.1 | 0.669 |
### System Info:
- hf_name: rus-afr
- source_languages: rus
- target_languages: afr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-afr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'af']
- src_constituents: {'rus'}
- tgt_constituents: {'afr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-afr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-afr/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: afr
- short_pair: ru-af
- chrF2_score: 0.669
- bleu: 48.1
- brevity_penalty: 1.0
- ref_len: 1390.0
- src_name: Russian
- tgt_name: Afrikaans
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: af
- prefer_old: False
- long_pair: rus-afr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ru", "af"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-af | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"af",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru",
"af"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ru #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### rus-afr
* source group: Russian
* target group: Afrikaans
* OPUS readme: rus-afr
* model: transformer-align
* source language(s): rus
* target language(s): afr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 48.1, chr-F: 0.669
### System Info:
* hf\_name: rus-afr
* source\_languages: rus
* target\_languages: afr
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ru', 'af']
* src\_constituents: {'rus'}
* tgt\_constituents: {'afr'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: rus
* tgt\_alpha3: afr
* short\_pair: ru-af
* chrF2\_score: 0.669
* bleu: 48.1
* brevity\_penalty: 1.0
* ref\_len: 1390.0
* src\_name: Russian
* tgt\_name: Afrikaans
* train\_date: 2020-06-17
* src\_alpha2: ru
* tgt\_alpha2: af
* prefer\_old: False
* long\_pair: rus-afr
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### rus-afr\n\n\n* source group: Russian\n* target group: Afrikaans\n* OPUS readme: rus-afr\n* model: transformer-align\n* source language(s): rus\n* target language(s): afr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.1, chr-F: 0.669",
"### System Info:\n\n\n* hf\\_name: rus-afr\n* source\\_languages: rus\n* target\\_languages: afr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'af']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'afr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: afr\n* short\\_pair: ru-af\n* chrF2\\_score: 0.669\n* bleu: 48.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 1390.0\n* src\\_name: Russian\n* tgt\\_name: Afrikaans\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: af\n* prefer\\_old: False\n* long\\_pair: rus-afr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### rus-afr\n\n\n* source group: Russian\n* target group: Afrikaans\n* OPUS readme: rus-afr\n* model: transformer-align\n* source language(s): rus\n* target language(s): afr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.1, chr-F: 0.669",
"### System Info:\n\n\n* hf\\_name: rus-afr\n* source\\_languages: rus\n* target\\_languages: afr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'af']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'afr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: afr\n* short\\_pair: ru-af\n* chrF2\\_score: 0.669\n* bleu: 48.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 1390.0\n* src\\_name: Russian\n* tgt\\_name: Afrikaans\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: af\n* prefer\\_old: False\n* long\\_pair: rus-afr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
134,
395
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### rus-afr\n\n\n* source group: Russian\n* target group: Afrikaans\n* OPUS readme: rus-afr\n* model: transformer-align\n* source language(s): rus\n* target language(s): afr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.1, chr-F: 0.669### System Info:\n\n\n* hf\\_name: rus-afr\n* source\\_languages: rus\n* target\\_languages: afr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'af']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'afr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: afr\n* short\\_pair: ru-af\n* chrF2\\_score: 0.669\n* bleu: 48.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 1390.0\n* src\\_name: Russian\n* tgt\\_name: Afrikaans\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: af\n* prefer\\_old: False\n* long\\_pair: rus-afr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### rus-ara
* source group: Russian
* target group: Arabic
* OPUS readme: [rus-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-ara/README.md)
* model: transformer
* source language(s): rus
* target language(s): apc ara arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.ara | 16.6 | 0.486 |
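Because this checkpoint covers several Arabic target variants (apc, ara, arz), the sentence-initial `>>id<<` token mentioned above selects the output variant. Below is a minimal usage sketch, assuming the model id `Helsinki-NLP/opus-mt-ru-ar` listed in this card; the sample sentence and the `>>ara<<` choice are illustrative only.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ru-ar"  # model id from this card
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The target-language token goes at the start of the source sentence.
src_text = ">>ara<< Как дела?"  # illustrative example; any supported id (apc/ara/arz) may be used

batch = tokenizer([src_text], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```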
### System Info:
- hf_name: rus-ara
- source_languages: rus
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'ar']
- src_constituents: {'rus'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.test.txt
- src_alpha3: rus
- tgt_alpha3: ara
- short_pair: ru-ar
- chrF2_score: 0.486
- bleu: 16.6
- brevity_penalty: 0.9690000000000001
- ref_len: 18878.0
- src_name: Russian
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: ru
- tgt_alpha2: ar
- prefer_old: False
- long_pair: rus-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ru", "ar"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-ar | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru",
"ar"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ru #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### rus-ara
* source group: Russian
* target group: Arabic
* OPUS readme: rus-ara
* model: transformer
* source language(s): rus
* target language(s): apc ara arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 16.6, chr-F: 0.486
### System Info:
* hf\_name: rus-ara
* source\_languages: rus
* target\_languages: ara
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ru', 'ar']
* src\_constituents: {'rus'}
* tgt\_constituents: {'apc', 'ara', 'arq\_Latn', 'arq', 'afb', 'ara\_Latn', 'apc\_Latn', 'arz'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: rus
* tgt\_alpha3: ara
* short\_pair: ru-ar
* chrF2\_score: 0.486
* bleu: 16.6
* brevity\_penalty: 0.9690000000000001
* ref\_len: 18878.0
* src\_name: Russian
* tgt\_name: Arabic
* train\_date: 2020-07-03
* src\_alpha2: ru
* tgt\_alpha2: ar
* prefer\_old: False
* long\_pair: rus-ara
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### rus-ara\n\n\n* source group: Russian\n* target group: Arabic\n* OPUS readme: rus-ara\n* model: transformer\n* source language(s): rus\n* target language(s): apc ara arz\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.6, chr-F: 0.486",
"### System Info:\n\n\n* hf\\_name: rus-ara\n* source\\_languages: rus\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'ar']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: ara\n* short\\_pair: ru-ar\n* chrF2\\_score: 0.486\n* bleu: 16.6\n* brevity\\_penalty: 0.9690000000000001\n* ref\\_len: 18878.0\n* src\\_name: Russian\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: ru\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: rus-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### rus-ara\n\n\n* source group: Russian\n* target group: Arabic\n* OPUS readme: rus-ara\n* model: transformer\n* source language(s): rus\n* target language(s): apc ara arz\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.6, chr-F: 0.486",
"### System Info:\n\n\n* hf\\_name: rus-ara\n* source\\_languages: rus\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'ar']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: ara\n* short\\_pair: ru-ar\n* chrF2\\_score: 0.486\n* bleu: 16.6\n* brevity\\_penalty: 0.9690000000000001\n* ref\\_len: 18878.0\n* src\\_name: Russian\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: ru\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: rus-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
158,
445
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### rus-ara\n\n\n* source group: Russian\n* target group: Arabic\n* OPUS readme: rus-ara\n* model: transformer\n* source language(s): rus\n* target language(s): apc ara arz\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.6, chr-F: 0.486### System Info:\n\n\n* hf\\_name: rus-ara\n* source\\_languages: rus\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'ar']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: ara\n* short\\_pair: ru-ar\n* chrF2\\_score: 0.486\n* bleu: 16.6\n* brevity\\_penalty: 0.9690000000000001\n* ref\\_len: 18878.0\n* src\\_name: Russian\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: ru\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: rus-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### rus-bul
* source group: Russian
* target group: Bulgarian
* OPUS readme: [rus-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-bul/README.md)
* model: transformer
* source language(s): rus
* target language(s): bul bul_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-bul/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.bul | 52.3 | 0.704 |
### System Info:
- hf_name: rus-bul
- source_languages: rus
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'bg']
- src_constituents: {'rus'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-bul/opus-2020-07-03.test.txt
- src_alpha3: rus
- tgt_alpha3: bul
- short_pair: ru-bg
- chrF2_score: 0.7040000000000001
- bleu: 52.3
- brevity_penalty: 0.919
- ref_len: 8272.0
- src_name: Russian
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: ru
- tgt_alpha2: bg
- prefer_old: False
- long_pair: rus-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ru", "bg"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-bg | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"bg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru",
"bg"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ru #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### rus-bul
* source group: Russian
* target group: Bulgarian
* OPUS readme: rus-bul
* model: transformer
* source language(s): rus
* target language(s): bul bul\_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 52.3, chr-F: 0.704
### System Info:
* hf\_name: rus-bul
* source\_languages: rus
* target\_languages: bul
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ru', 'bg']
* src\_constituents: {'rus'}
* tgt\_constituents: {'bul', 'bul\_Latn'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: rus
* tgt\_alpha3: bul
* short\_pair: ru-bg
* chrF2\_score: 0.7040000000000001
* bleu: 52.3
* brevity\_penalty: 0.919
* ref\_len: 8272.0
* src\_name: Russian
* tgt\_name: Bulgarian
* train\_date: 2020-07-03
* src\_alpha2: ru
* tgt\_alpha2: bg
* prefer\_old: False
* long\_pair: rus-bul
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### rus-bul\n\n\n* source group: Russian\n* target group: Bulgarian\n* OPUS readme: rus-bul\n* model: transformer\n* source language(s): rus\n* target language(s): bul bul\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.3, chr-F: 0.704",
"### System Info:\n\n\n* hf\\_name: rus-bul\n* source\\_languages: rus\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'bg']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: bul\n* short\\_pair: ru-bg\n* chrF2\\_score: 0.7040000000000001\n* bleu: 52.3\n* brevity\\_penalty: 0.919\n* ref\\_len: 8272.0\n* src\\_name: Russian\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-07-03\n* src\\_alpha2: ru\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: rus-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### rus-bul\n\n\n* source group: Russian\n* target group: Bulgarian\n* OPUS readme: rus-bul\n* model: transformer\n* source language(s): rus\n* target language(s): bul bul\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.3, chr-F: 0.704",
"### System Info:\n\n\n* hf\\_name: rus-bul\n* source\\_languages: rus\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'bg']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: bul\n* short\\_pair: ru-bg\n* chrF2\\_score: 0.7040000000000001\n* bleu: 52.3\n* brevity\\_penalty: 0.919\n* ref\\_len: 8272.0\n* src\\_name: Russian\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-07-03\n* src\\_alpha2: ru\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: rus-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
164,
416
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### rus-bul\n\n\n* source group: Russian\n* target group: Bulgarian\n* OPUS readme: rus-bul\n* model: transformer\n* source language(s): rus\n* target language(s): bul bul\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.3, chr-F: 0.704### System Info:\n\n\n* hf\\_name: rus-bul\n* source\\_languages: rus\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'bg']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: bul\n* short\\_pair: ru-bg\n* chrF2\\_score: 0.7040000000000001\n* bleu: 52.3\n* brevity\\_penalty: 0.919\n* ref\\_len: 8272.0\n* src\\_name: Russian\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-07-03\n* src\\_alpha2: ru\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: rus-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### rus-dan
* source group: Russian
* target group: Danish
* OPUS readme: [rus-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-dan/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): dan
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.dan | 56.6 | 0.714 |
### System Info:
- hf_name: rus-dan
- source_languages: rus
- target_languages: dan
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-dan/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'da']
- src_constituents: {'rus'}
- tgt_constituents: {'dan'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: dan
- short_pair: ru-da
- chrF2_score: 0.7140000000000001
- bleu: 56.6
- brevity_penalty: 0.977
- ref_len: 11746.0
- src_name: Russian
- tgt_name: Danish
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: da
- prefer_old: False
- long_pair: rus-dan
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ru", "da"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-da | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru",
"da"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ru #da #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### rus-dan
* source group: Russian
* target group: Danish
* OPUS readme: rus-dan
* model: transformer-align
* source language(s): rus
* target language(s): dan
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 56.6, chr-F: 0.714
### System Info:
* hf\_name: rus-dan
* source\_languages: rus
* target\_languages: dan
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ru', 'da']
* src\_constituents: {'rus'}
* tgt\_constituents: {'dan'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: rus
* tgt\_alpha3: dan
* short\_pair: ru-da
* chrF2\_score: 0.7140000000000001
* bleu: 56.6
* brevity\_penalty: 0.977
* ref\_len: 11746.0
* src\_name: Russian
* tgt\_name: Danish
* train\_date: 2020-06-17
* src\_alpha2: ru
* tgt\_alpha2: da
* prefer\_old: False
* long\_pair: rus-dan
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### rus-dan\n\n\n* source group: Russian\n* target group: Danish\n* OPUS readme: rus-dan\n* model: transformer-align\n* source language(s): rus\n* target language(s): dan\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.6, chr-F: 0.714",
"### System Info:\n\n\n* hf\\_name: rus-dan\n* source\\_languages: rus\n* target\\_languages: dan\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'da']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'dan'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: dan\n* short\\_pair: ru-da\n* chrF2\\_score: 0.7140000000000001\n* bleu: 56.6\n* brevity\\_penalty: 0.977\n* ref\\_len: 11746.0\n* src\\_name: Russian\n* tgt\\_name: Danish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: da\n* prefer\\_old: False\n* long\\_pair: rus-dan\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #da #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### rus-dan\n\n\n* source group: Russian\n* target group: Danish\n* OPUS readme: rus-dan\n* model: transformer-align\n* source language(s): rus\n* target language(s): dan\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.6, chr-F: 0.714",
"### System Info:\n\n\n* hf\\_name: rus-dan\n* source\\_languages: rus\n* target\\_languages: dan\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'da']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'dan'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: dan\n* short\\_pair: ru-da\n* chrF2\\_score: 0.7140000000000001\n* bleu: 56.6\n* brevity\\_penalty: 0.977\n* ref\\_len: 11746.0\n* src\\_name: Russian\n* tgt\\_name: Danish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: da\n* prefer\\_old: False\n* long\\_pair: rus-dan\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
131,
397
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #da #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### rus-dan\n\n\n* source group: Russian\n* target group: Danish\n* OPUS readme: rus-dan\n* model: transformer-align\n* source language(s): rus\n* target language(s): dan\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.6, chr-F: 0.714### System Info:\n\n\n* hf\\_name: rus-dan\n* source\\_languages: rus\n* target\\_languages: dan\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'da']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'dan'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: dan\n* short\\_pair: ru-da\n* chrF2\\_score: 0.7140000000000001\n* bleu: 56.6\n* brevity\\_penalty: 0.977\n* ref\\_len: 11746.0\n* src\\_name: Russian\n* tgt\\_name: Danish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: da\n* prefer\\_old: False\n* long\\_pair: rus-dan\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-ru-en
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Transformer-align
- **Language(s):**
- Source Language: Russian
- Target Language: English
- **License:** CC-BY-4.0
- **Resources for more information:**
- [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Uses
#### Direct Use
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
Further details about the dataset for this model can be found in the OPUS readme: [ru-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ru-en/README.md)
## Training
#### Training Data
##### Preprocessing
* Pre-processing: Normalization + SentencePiece
* Dataset: [opus](https://github.com/Helsinki-NLP/Opus-MT)
* Download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/ru-en/opus-2020-02-26.zip)
* Test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-en/opus-2020-02-26.test.txt)
## Evaluation
#### Results
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-en/opus-2020-02-26.eval.txt)
#### Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012.ru.en | 34.8 | 0.603 |
| newstest2013.ru.en | 27.9 | 0.545 |
| newstest2014-ruen.ru.en | 31.9 | 0.591 |
| newstest2015-enru.ru.en | 30.4 | 0.568 |
| newstest2016-enru.ru.en | 30.1 | 0.565 |
| newstest2017-enru.ru.en | 33.4 | 0.593 |
| newstest2018-enru.ru.en | 29.6 | 0.565 |
| newstest2019-ruen.ru.en | 31.4 | 0.576 |
| Tatoeba.ru.en | 61.1 | 0.736 |
## Citation Information
```bibtex
@InProceedings{TiedemannThottingal:EAMT2020,
author = {J{\"o}rg Tiedemann and Santhosh Thottingal},
title = {{OPUS-MT} — {B}uilding open translation services for the {W}orld},
  booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)},
year = {2020},
address = {Lisbon, Portugal}
}
```
## How to Get Started With the Model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-ru-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-ru-en")
```
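
A short translation call building on the snippet above; the input sentence and default generation settings are only an illustrative sketch.

```python
src_text = "Привет, как дела?"  # illustrative Russian input

batch = tokenizer([src_text], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```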
| {"license": "cc-by-4.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-en | null | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"ru",
"en",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #rust #marian #text2text-generation #translation #ru #en #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ru-en
Table of Contents
-----------------
* Model Details
* Uses
* Risks, Limitations and Biases
* Training
* Evaluation
* Citation Information
* How to Get Started With the Model
Model Details
-------------
Model Description:
* Developed by: Language Technology Research Group at the University of Helsinki
* Model Type: Transformer-align
* Language(s):
+ Source Language: Russian
+ Target Language: English
* License: CC-BY-4.0
* Resources for more information:
+ GitHub Repo
Uses
----
#### Direct Use
This model can be used for translation and text-to-text generation.
Risks, Limitations and Biases
-----------------------------
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
Further details about the dataset for this model can be found in the OPUS readme: ru-en
Training
--------
#### Training Data
##### Preprocessing
* Pre-processing: Normalization + SentencePiece
* Dataset: opus
* Download original weights: URL
* Test set translations: URL
Evaluation
----------
#### Results
* test set scores: URL
#### Benchmarks
testset: URL, BLEU: 34.8, chr-F: 0.603
testset: URL, BLEU: 27.9, chr-F: 0.545
testset: URL, BLEU: 31.9, chr-F: 0.591
testset: URL, BLEU: 30.4, chr-F: 0.568
testset: URL, BLEU: 30.1, chr-F: 0.565
testset: URL, BLEU: 33.4, chr-F: 0.593
testset: URL, BLEU: 29.6, chr-F: 0.565
testset: URL, BLEU: 31.4, chr-F: 0.576
testset: URL, BLEU: 61.1, chr-F: 0.736
How to Get Started With the Model
---------------------------------
| [
"### opus-mt-ru-en\n\n\nTable of Contents\n-----------------\n\n\n* Model Details\n* Uses\n* Risks, Limitations and Biases\n* Training\n* Evaluation\n* Citation Information\n* How to Get Started With the Model\n\n\nModel Details\n-------------\n\n\nModel Description:\n\n\n* Developed by: Language Technology Research Group at the University of Helsinki\n* Model Type: Transformer-align\n* Language(s):\n\t+ Source Language: Russian\n\t+ Target Language: English\n* License: CC-BY-4.0\n* Resources for more information:\n\t+ GitHub Repo\n\n\nUses\n----",
"#### Direct Use\n\n\nThis model can be used for translation and text-to-text generation.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nFurther details about the dataset for this model can be found in the OPUS readme: ru-en\n\n\nTraining\n--------",
"#### Training Data",
"##### Preprocessing\n\n\n* Pre-processing: Normalization + SentencePiece\n* Dataset: opus\n* Download original weights: URL\n* Test set translations: URL\n\n\nEvaluation\n----------",
"#### Results\n\n\n* test set scores: URL",
"#### Benchmarks\n\n\ntestset: URL, BLEU: 34.8, chr-F: 0.603\ntestset: URL, BLEU: 27.9, chr-F: 0.545\ntestset: URL, BLEU: 31.9, chr-F: 0.591\ntestset: URL, BLEU: 30.4, chr-F: 0.568\ntestset: URL, BLEU: 30.1, chr-F: 0.565\ntestset: URL, BLEU: 33.4, chr-F: 0.593\ntestset: URL, BLEU: 29.6, chr-F: 0.565\ntestset: URL, BLEU: 31.4, chr-F: 0.576\ntestset: URL, BLEU: 61.1, chr-F: 0.736\n\n\nHow to Get Started With the Model\n---------------------------------"
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #ru #en #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ru-en\n\n\nTable of Contents\n-----------------\n\n\n* Model Details\n* Uses\n* Risks, Limitations and Biases\n* Training\n* Evaluation\n* Citation Information\n* How to Get Started With the Model\n\n\nModel Details\n-------------\n\n\nModel Description:\n\n\n* Developed by: Language Technology Research Group at the University of Helsinki\n* Model Type: Transformer-align\n* Language(s):\n\t+ Source Language: Russian\n\t+ Target Language: English\n* License: CC-BY-4.0\n* Resources for more information:\n\t+ GitHub Repo\n\n\nUses\n----",
"#### Direct Use\n\n\nThis model can be used for translation and text-to-text generation.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nFurther details about the dataset for this model can be found in the OPUS readme: ru-en\n\n\nTraining\n--------",
"#### Training Data",
"##### Preprocessing\n\n\n* Pre-processing: Normalization + SentencePiece\n* Dataset: opus\n* Download original weights: URL\n* Test set translations: URL\n\n\nEvaluation\n----------",
"#### Results\n\n\n* test set scores: URL",
"#### Benchmarks\n\n\ntestset: URL, BLEU: 34.8, chr-F: 0.603\ntestset: URL, BLEU: 27.9, chr-F: 0.545\ntestset: URL, BLEU: 31.9, chr-F: 0.591\ntestset: URL, BLEU: 30.4, chr-F: 0.568\ntestset: URL, BLEU: 30.1, chr-F: 0.565\ntestset: URL, BLEU: 33.4, chr-F: 0.593\ntestset: URL, BLEU: 29.6, chr-F: 0.565\ntestset: URL, BLEU: 31.4, chr-F: 0.576\ntestset: URL, BLEU: 61.1, chr-F: 0.736\n\n\nHow to Get Started With the Model\n---------------------------------"
] | [
55,
139,
149,
6,
49,
12,
253
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #ru #en #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ru-en\n\n\nTable of Contents\n-----------------\n\n\n* Model Details\n* Uses\n* Risks, Limitations and Biases\n* Training\n* Evaluation\n* Citation Information\n* How to Get Started With the Model\n\n\nModel Details\n-------------\n\n\nModel Description:\n\n\n* Developed by: Language Technology Research Group at the University of Helsinki\n* Model Type: Transformer-align\n* Language(s):\n\t+ Source Language: Russian\n\t+ Target Language: English\n* License: CC-BY-4.0\n* Resources for more information:\n\t+ GitHub Repo\n\n\nUses\n----#### Direct Use\n\n\nThis model can be used for translation and text-to-text generation.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nFurther details about the dataset for this model can be found in the OPUS readme: ru-en\n\n\nTraining\n--------#### Training Data##### Preprocessing\n\n\n* Pre-processing: Normalization + SentencePiece\n* Dataset: opus\n* Download original weights: URL\n* Test set translations: URL\n\n\nEvaluation\n----------#### Results\n\n\n* test set scores: URL#### Benchmarks\n\n\ntestset: URL, BLEU: 34.8, chr-F: 0.603\ntestset: URL, BLEU: 27.9, chr-F: 0.545\ntestset: URL, BLEU: 31.9, chr-F: 0.591\ntestset: URL, BLEU: 30.4, chr-F: 0.568\ntestset: URL, BLEU: 30.1, chr-F: 0.565\ntestset: URL, BLEU: 33.4, chr-F: 0.593\ntestset: URL, BLEU: 29.6, chr-F: 0.565\ntestset: URL, BLEU: 31.4, chr-F: 0.576\ntestset: URL, BLEU: 61.1, chr-F: 0.736\n\n\nHow to Get Started With the Model\n---------------------------------"
] |
translation | transformers |
### rus-epo
* source group: Russian
* target group: Esperanto
* OPUS readme: [rus-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-epo/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.epo | 24.2 | 0.436 |
### System Info:
- hf_name: rus-epo
- source_languages: rus
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'eo']
- src_constituents: {'rus'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-epo/opus-2020-06-16.test.txt
- src_alpha3: rus
- tgt_alpha3: epo
- short_pair: ru-eo
- chrF2_score: 0.436
- bleu: 24.2
- brevity_penalty: 0.925
- ref_len: 77197.0
- src_name: Russian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: ru
- tgt_alpha2: eo
- prefer_old: False
- long_pair: rus-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ru", "eo"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-eo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru",
"eo"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ru #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### rus-epo
* source group: Russian
* target group: Esperanto
* OPUS readme: rus-epo
* model: transformer-align
* source language(s): rus
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.2, chr-F: 0.436
### System Info:
* hf\_name: rus-epo
* source\_languages: rus
* target\_languages: epo
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ru', 'eo']
* src\_constituents: {'rus'}
* tgt\_constituents: {'epo'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: rus
* tgt\_alpha3: epo
* short\_pair: ru-eo
* chrF2\_score: 0.436
* bleu: 24.2
* brevity\_penalty: 0.925
* ref\_len: 77197.0
* src\_name: Russian
* tgt\_name: Esperanto
* train\_date: 2020-06-16
* src\_alpha2: ru
* tgt\_alpha2: eo
* prefer\_old: False
* long\_pair: rus-epo
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### rus-epo\n\n\n* source group: Russian\n* target group: Esperanto\n* OPUS readme: rus-epo\n* model: transformer-align\n* source language(s): rus\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.2, chr-F: 0.436",
"### System Info:\n\n\n* hf\\_name: rus-epo\n* source\\_languages: rus\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'eo']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: epo\n* short\\_pair: ru-eo\n* chrF2\\_score: 0.436\n* bleu: 24.2\n* brevity\\_penalty: 0.925\n* ref\\_len: 77197.0\n* src\\_name: Russian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: ru\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: rus-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### rus-epo\n\n\n* source group: Russian\n* target group: Esperanto\n* OPUS readme: rus-epo\n* model: transformer-align\n* source language(s): rus\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.2, chr-F: 0.436",
"### System Info:\n\n\n* hf\\_name: rus-epo\n* source\\_languages: rus\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'eo']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: epo\n* short\\_pair: ru-eo\n* chrF2\\_score: 0.436\n* bleu: 24.2\n* brevity\\_penalty: 0.925\n* ref\\_len: 77197.0\n* src\\_name: Russian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: ru\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: rus-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
136,
402
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### rus-epo\n\n\n* source group: Russian\n* target group: Esperanto\n* OPUS readme: rus-epo\n* model: transformer-align\n* source language(s): rus\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.2, chr-F: 0.436### System Info:\n\n\n* hf\\_name: rus-epo\n* source\\_languages: rus\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'eo']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: epo\n* short\\_pair: ru-eo\n* chrF2\\_score: 0.436\n* bleu: 24.2\n* brevity\\_penalty: 0.925\n* ref\\_len: 77197.0\n* src\\_name: Russian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: ru\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: rus-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-ru-es
* source languages: ru
* target languages: es
* OPUS readme: [ru-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ru-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/ru-es/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-es/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-es/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012.ru.es | 26.1 | 0.527 |
| newstest2013.ru.es | 28.2 | 0.538 |
| Tatoeba.ru.es | 49.4 | 0.675 |
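
For quick experiments, the checkpoint can also be loaded through the high-level `pipeline` API; a minimal sketch, assuming the model id `Helsinki-NLP/opus-mt-ru-es` (the example sentence is illustrative).

```python
from transformers import pipeline

# Returns a list of dicts with a 'translation_text' field.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ru-es")
print(translator("Доброе утро!"))  # illustrative input
```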
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ru #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ru-es
* source languages: ru
* target languages: es
* OPUS readme: ru-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.1, chr-F: 0.527
testset: URL, BLEU: 28.2, chr-F: 0.538
testset: URL, BLEU: 49.4, chr-F: 0.675
| [
"### opus-mt-ru-es\n\n\n* source languages: ru\n* target languages: es\n* OPUS readme: ru-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.527\ntestset: URL, BLEU: 28.2, chr-F: 0.538\ntestset: URL, BLEU: 49.4, chr-F: 0.675"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ru-es\n\n\n* source languages: ru\n* target languages: es\n* OPUS readme: ru-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.527\ntestset: URL, BLEU: 28.2, chr-F: 0.538\ntestset: URL, BLEU: 49.4, chr-F: 0.675"
] | [
51,
152
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ru-es\n\n\n* source languages: ru\n* target languages: es\n* OPUS readme: ru-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.527\ntestset: URL, BLEU: 28.2, chr-F: 0.538\ntestset: URL, BLEU: 49.4, chr-F: 0.675"
] |
translation | transformers |
### rus-est
* source group: Russian
* target group: Estonian
* OPUS readme: [rus-est](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-est/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): est
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-est/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-est/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-est/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.est | 57.5 | 0.749 |
### System Info:
- hf_name: rus-est
- source_languages: rus
- target_languages: est
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-est/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'et']
- src_constituents: {'rus'}
- tgt_constituents: {'est'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-est/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-est/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: est
- short_pair: ru-et
- chrF2_score: 0.7490000000000001
- bleu: 57.5
- brevity_penalty: 0.975
- ref_len: 3572.0
- src_name: Russian
- tgt_name: Estonian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: et
- prefer_old: False
- long_pair: rus-est
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ru", "et"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-et | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"et",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru",
"et"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ru #et #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### rus-est
* source group: Russian
* target group: Estonian
* OPUS readme: rus-est
* model: transformer-align
* source language(s): rus
* target language(s): est
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 57.5, chr-F: 0.749
### System Info:
* hf\_name: rus-est
* source\_languages: rus
* target\_languages: est
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ru', 'et']
* src\_constituents: {'rus'}
* tgt\_constituents: {'est'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: rus
* tgt\_alpha3: est
* short\_pair: ru-et
* chrF2\_score: 0.7490000000000001
* bleu: 57.5
* brevity\_penalty: 0.975
* ref\_len: 3572.0
* src\_name: Russian
* tgt\_name: Estonian
* train\_date: 2020-06-17
* src\_alpha2: ru
* tgt\_alpha2: et
* prefer\_old: False
* long\_pair: rus-est
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### rus-est\n\n\n* source group: Russian\n* target group: Estonian\n* OPUS readme: rus-est\n* model: transformer-align\n* source language(s): rus\n* target language(s): est\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 57.5, chr-F: 0.749",
"### System Info:\n\n\n* hf\\_name: rus-est\n* source\\_languages: rus\n* target\\_languages: est\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'et']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'est'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: est\n* short\\_pair: ru-et\n* chrF2\\_score: 0.7490000000000001\n* bleu: 57.5\n* brevity\\_penalty: 0.975\n* ref\\_len: 3572.0\n* src\\_name: Russian\n* tgt\\_name: Estonian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: et\n* prefer\\_old: False\n* long\\_pair: rus-est\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #et #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### rus-est\n\n\n* source group: Russian\n* target group: Estonian\n* OPUS readme: rus-est\n* model: transformer-align\n* source language(s): rus\n* target language(s): est\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 57.5, chr-F: 0.749",
"### System Info:\n\n\n* hf\\_name: rus-est\n* source\\_languages: rus\n* target\\_languages: est\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'et']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'est'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: est\n* short\\_pair: ru-et\n* chrF2\\_score: 0.7490000000000001\n* bleu: 57.5\n* brevity\\_penalty: 0.975\n* ref\\_len: 3572.0\n* src\\_name: Russian\n* tgt\\_name: Estonian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: et\n* prefer\\_old: False\n* long\\_pair: rus-est\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
131,
397
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #et #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### rus-est\n\n\n* source group: Russian\n* target group: Estonian\n* OPUS readme: rus-est\n* model: transformer-align\n* source language(s): rus\n* target language(s): est\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 57.5, chr-F: 0.749### System Info:\n\n\n* hf\\_name: rus-est\n* source\\_languages: rus\n* target\\_languages: est\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'et']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'est'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: est\n* short\\_pair: ru-et\n* chrF2\\_score: 0.7490000000000001\n* bleu: 57.5\n* brevity\\_penalty: 0.975\n* ref\\_len: 3572.0\n* src\\_name: Russian\n* tgt\\_name: Estonian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: et\n* prefer\\_old: False\n* long\\_pair: rus-est\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### rus-eus
* source group: Russian
* target group: Basque
* OPUS readme: [rus-eus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-eus/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): eus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-eus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-eus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-eus/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.eus | 29.7 | 0.539 |
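
A minimal usage sketch with the Transformers pipeline API; the model id matches this repository, while the example sentence and `max_length` value are illustrative assumptions rather than part of the released evaluation.

```python
# Hypothetical quick-start sketch using the Hugging Face Transformers pipeline helper.
# The model id matches this repository; the Russian example sentence is made up.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ru-eu")
result = translator("Я люблю языки.", max_length=128)
print(result[0]["translation_text"])  # Basque translation produced by the model
```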
### System Info:
- hf_name: rus-eus
- source_languages: rus
- target_languages: eus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-eus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'eu']
- src_constituents: {'rus'}
- tgt_constituents: {'eus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-eus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-eus/opus-2020-06-16.test.txt
- src_alpha3: rus
- tgt_alpha3: eus
- short_pair: ru-eu
- chrF2_score: 0.539
- bleu: 29.7
- brevity_penalty: 0.9440000000000001
- ref_len: 2373.0
- src_name: Russian
- tgt_name: Basque
- train_date: 2020-06-16
- src_alpha2: ru
- tgt_alpha2: eu
- prefer_old: False
- long_pair: rus-eus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ru", "eu"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-eu | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"ru",
"eu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru",
"eu"
] | TAGS
#transformers #pytorch #marian #text2text-generation #translation #ru #eu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### rus-eus
* source group: Russian
* target group: Basque
* OPUS readme: rus-eus
* model: transformer-align
* source language(s): rus
* target language(s): eus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 29.7, chr-F: 0.539
### System Info:
* hf\_name: rus-eus
* source\_languages: rus
* target\_languages: eus
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ru', 'eu']
* src\_constituents: {'rus'}
* tgt\_constituents: {'eus'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: rus
* tgt\_alpha3: eus
* short\_pair: ru-eu
* chrF2\_score: 0.539
* bleu: 29.7
* brevity\_penalty: 0.9440000000000001
* ref\_len: 2373.0
* src\_name: Russian
* tgt\_name: Basque
* train\_date: 2020-06-16
* src\_alpha2: ru
* tgt\_alpha2: eu
* prefer\_old: False
* long\_pair: rus-eus
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### rus-eus\n\n\n* source group: Russian\n* target group: Basque\n* OPUS readme: rus-eus\n* model: transformer-align\n* source language(s): rus\n* target language(s): eus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.7, chr-F: 0.539",
"### System Info:\n\n\n* hf\\_name: rus-eus\n* source\\_languages: rus\n* target\\_languages: eus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'eu']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'eus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: eus\n* short\\_pair: ru-eu\n* chrF2\\_score: 0.539\n* bleu: 29.7\n* brevity\\_penalty: 0.9440000000000001\n* ref\\_len: 2373.0\n* src\\_name: Russian\n* tgt\\_name: Basque\n* train\\_date: 2020-06-16\n* src\\_alpha2: ru\n* tgt\\_alpha2: eu\n* prefer\\_old: False\n* long\\_pair: rus-eus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #ru #eu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### rus-eus\n\n\n* source group: Russian\n* target group: Basque\n* OPUS readme: rus-eus\n* model: transformer-align\n* source language(s): rus\n* target language(s): eus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.7, chr-F: 0.539",
"### System Info:\n\n\n* hf\\_name: rus-eus\n* source\\_languages: rus\n* target\\_languages: eus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'eu']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'eus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: eus\n* short\\_pair: ru-eu\n* chrF2\\_score: 0.539\n* bleu: 29.7\n* brevity\\_penalty: 0.9440000000000001\n* ref\\_len: 2373.0\n* src\\_name: Russian\n* tgt\\_name: Basque\n* train\\_date: 2020-06-16\n* src\\_alpha2: ru\n* tgt\\_alpha2: eu\n* prefer\\_old: False\n* long\\_pair: rus-eus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
48,
134,
402
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #ru #eu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### rus-eus\n\n\n* source group: Russian\n* target group: Basque\n* OPUS readme: rus-eus\n* model: transformer-align\n* source language(s): rus\n* target language(s): eus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.7, chr-F: 0.539### System Info:\n\n\n* hf\\_name: rus-eus\n* source\\_languages: rus\n* target\\_languages: eus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'eu']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'eus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: eus\n* short\\_pair: ru-eu\n* chrF2\\_score: 0.539\n* bleu: 29.7\n* brevity\\_penalty: 0.9440000000000001\n* ref\\_len: 2373.0\n* src\\_name: Russian\n* tgt\\_name: Basque\n* train\\_date: 2020-06-16\n* src\\_alpha2: ru\n* tgt\\_alpha2: eu\n* prefer\\_old: False\n* long\\_pair: rus-eus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-ru-fi
* source languages: ru
* target languages: fi
* OPUS readme: [ru-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ru-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-04-12.zip](https://object.pouta.csc.fi/OPUS-MT-models/ru-fi/opus-2020-04-12.zip)
* test set translations: [opus-2020-04-12.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-fi/opus-2020-04-12.test.txt)
* test set scores: [opus-2020-04-12.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-fi/opus-2020-04-12.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ru.fi | 40.1 | 0.646 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ru #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ru-fi
* source languages: ru
* target languages: fi
* OPUS readme: ru-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 40.1, chr-F: 0.646
| [
"### opus-mt-ru-fi\n\n\n* source languages: ru\n* target languages: fi\n* OPUS readme: ru-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.646"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ru-fi\n\n\n* source languages: ru\n* target languages: fi\n* OPUS readme: ru-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.646"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ru-fi\n\n\n* source languages: ru\n* target languages: fi\n* OPUS readme: ru-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.646"
] |
translation | transformers |
### opus-mt-ru-fr
* source languages: ru
* target languages: fr
* OPUS readme: [ru-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ru-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/ru-fr/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-fr/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-fr/opus-2020-01-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012.ru.fr | 18.3 | 0.497 |
| newstest2013.ru.fr | 21.6 | 0.516 |
| Tatoeba.ru.fr | 51.5 | 0.670 |
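
For batch translation the Marian classes can be used directly instead of the pipeline helper; a sketch under the assumption that the model id of this repository is used, with made-up sentences and illustrative generation settings (beam size, max length).

```python
# Sketch: batch translation with MarianTokenizer / MarianMTModel.
# Model id matches this repository; sentences and generation settings are illustrative.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ru-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentences = ["Как дела?", "Погода сегодня хорошая."]
batch = tokenizer(sentences, return_tensors="pt", padding=True)
generated = model.generate(**batch, num_beams=4, max_length=128)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```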
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-fr | null | [
"transformers",
"pytorch",
"tf",
"jax",
"marian",
"text2text-generation",
"translation",
"ru",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #jax #marian #text2text-generation #translation #ru #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ru-fr
* source languages: ru
* target languages: fr
* OPUS readme: ru-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 18.3, chr-F: 0.497
testset: URL, BLEU: 21.6, chr-F: 0.516
testset: URL, BLEU: 51.5, chr-F: 0.670
| [
"### opus-mt-ru-fr\n\n\n* source languages: ru\n* target languages: fr\n* OPUS readme: ru-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.3, chr-F: 0.497\ntestset: URL, BLEU: 21.6, chr-F: 0.516\ntestset: URL, BLEU: 51.5, chr-F: 0.670"
] | [
"TAGS\n#transformers #pytorch #tf #jax #marian #text2text-generation #translation #ru #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ru-fr\n\n\n* source languages: ru\n* target languages: fr\n* OPUS readme: ru-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.3, chr-F: 0.497\ntestset: URL, BLEU: 21.6, chr-F: 0.516\ntestset: URL, BLEU: 51.5, chr-F: 0.670"
] | [
53,
151
] | [
"TAGS\n#transformers #pytorch #tf #jax #marian #text2text-generation #translation #ru #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ru-fr\n\n\n* source languages: ru\n* target languages: fr\n* OPUS readme: ru-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.3, chr-F: 0.497\ntestset: URL, BLEU: 21.6, chr-F: 0.516\ntestset: URL, BLEU: 51.5, chr-F: 0.670"
] |
translation | transformers |
### ru-he
* source group: Russian
* target group: Hebrew
* OPUS readme: [rus-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-heb/README.md)
* model: transformer
* source language(s): rus
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-10-04.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.zip)
* test set translations: [opus-2020-10-04.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.test.txt)
* test set scores: [opus-2020-10-04.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.heb | 36.1 | 0.569 |
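
The BLEU and chr-F figures above are reported in the linked eval file; a hedged sketch of how comparable corpus scores can be recomputed with the `sacrebleu` package. The file names below are placeholders — in practice the hypothesis and reference lines would be extracted from the released test set translations file.

```python
# Sketch: recomputing corpus BLEU and chrF2 with sacrebleu.
# "hyps.txt" and "refs.txt" are placeholder file names; the actual lines would be
# taken from the opus-*.test.txt file linked above.
import sacrebleu

with open("hyps.txt", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]
with open("refs.txt", encoding="utf-8") as f:
    refs = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hyps, [refs])
chrf = sacrebleu.corpus_chrf(hyps, [refs])
print(f"BLEU = {bleu.score:.1f}, chrF2 = {chrf.score / 100:.3f}")
```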
### System Info:
- hf_name: ru-he
- source_languages: rus
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'he']
- src_constituents: ('Russian', {'rus'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: rus-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-heb/opus-2020-10-04.test.txt
- src_alpha3: rus
- tgt_alpha3: heb
- chrF2_score: 0.569
- bleu: 36.1
- brevity_penalty: 0.9990000000000001
- ref_len: 15028.0
- src_name: Russian
- tgt_name: Hebrew
- train_date: 2020-10-04 00:00:00
- src_alpha2: ru
- tgt_alpha2: he
- prefer_old: False
- short_pair: ru-he
- helsinki_git_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561
- transformers_git_sha: b0a907615aca0d728a9bc90f16caef0848f6a435
- port_machine: LM0-400-22516.local
- port_time: 2020-10-26-16:16 | {"language": ["ru", "he"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-he | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru",
"he"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ru #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ru-he
* source group: Russian
* target group: Hebrew
* OPUS readme: rus-heb
* model: transformer
* source language(s): rus
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 36.1, chr-F: 0.569
### System Info:
* hf\_name: ru-he
* source\_languages: rus
* target\_languages: heb
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ru', 'he']
* src\_constituents: ('Russian', {'rus'})
* tgt\_constituents: ('Hebrew', {'heb'})
* src\_multilingual: False
* tgt\_multilingual: False
* long\_pair: rus-heb
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: rus
* tgt\_alpha3: heb
* chrF2\_score: 0.569
* bleu: 36.1
* brevity\_penalty: 0.9990000000000001
* ref\_len: 15028.0
* src\_name: Russian
* tgt\_name: Hebrew
* train\_date: 2020-10-04 00:00:00
* src\_alpha2: ru
* tgt\_alpha2: he
* prefer\_old: False
* short\_pair: ru-he
* helsinki\_git\_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561
* transformers\_git\_sha: b0a907615aca0d728a9bc90f16caef0848f6a435
* port\_machine: URL
* port\_time: 2020-10-26-16:16
| [
"### ru-he\n\n\n* source group: Russian\n* target group: Hebrew\n* OPUS readme: rus-heb\n* model: transformer\n* source language(s): rus\n* target language(s): heb\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.1, chr-F: 0.569",
"### System Info:\n\n\n* hf\\_name: ru-he\n* source\\_languages: rus\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'he']\n* src\\_constituents: ('Russian', {'rus'})\n* tgt\\_constituents: ('Hebrew', {'heb'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: rus-heb\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: heb\n* chrF2\\_score: 0.569\n* bleu: 36.1\n* brevity\\_penalty: 0.9990000000000001\n* ref\\_len: 15028.0\n* src\\_name: Russian\n* tgt\\_name: Hebrew\n* train\\_date: 2020-10-04 00:00:00\n* src\\_alpha2: ru\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* short\\_pair: ru-he\n* helsinki\\_git\\_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561\n* transformers\\_git\\_sha: b0a907615aca0d728a9bc90f16caef0848f6a435\n* port\\_machine: URL\n* port\\_time: 2020-10-26-16:16"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ru-he\n\n\n* source group: Russian\n* target group: Hebrew\n* OPUS readme: rus-heb\n* model: transformer\n* source language(s): rus\n* target language(s): heb\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.1, chr-F: 0.569",
"### System Info:\n\n\n* hf\\_name: ru-he\n* source\\_languages: rus\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'he']\n* src\\_constituents: ('Russian', {'rus'})\n* tgt\\_constituents: ('Hebrew', {'heb'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: rus-heb\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: heb\n* chrF2\\_score: 0.569\n* bleu: 36.1\n* brevity\\_penalty: 0.9990000000000001\n* ref\\_len: 15028.0\n* src\\_name: Russian\n* tgt\\_name: Hebrew\n* train\\_date: 2020-10-04 00:00:00\n* src\\_alpha2: ru\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* short\\_pair: ru-he\n* helsinki\\_git\\_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561\n* transformers\\_git\\_sha: b0a907615aca0d728a9bc90f16caef0848f6a435\n* port\\_machine: URL\n* port\\_time: 2020-10-26-16:16"
] | [
51,
129,
418
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ru-he\n\n\n* source group: Russian\n* target group: Hebrew\n* OPUS readme: rus-heb\n* model: transformer\n* source language(s): rus\n* target language(s): heb\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.1, chr-F: 0.569### System Info:\n\n\n* hf\\_name: ru-he\n* source\\_languages: rus\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'he']\n* src\\_constituents: ('Russian', {'rus'})\n* tgt\\_constituents: ('Hebrew', {'heb'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: rus-heb\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: heb\n* chrF2\\_score: 0.569\n* bleu: 36.1\n* brevity\\_penalty: 0.9990000000000001\n* ref\\_len: 15028.0\n* src\\_name: Russian\n* tgt\\_name: Hebrew\n* train\\_date: 2020-10-04 00:00:00\n* src\\_alpha2: ru\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* short\\_pair: ru-he\n* helsinki\\_git\\_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561\n* transformers\\_git\\_sha: b0a907615aca0d728a9bc90f16caef0848f6a435\n* port\\_machine: URL\n* port\\_time: 2020-10-26-16:16"
] |
translation | transformers |
### rus-hye
* source group: Russian
* target group: Armenian
* OPUS readme: [rus-hye](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-hye/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): hye hye_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-hye/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-hye/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-hye/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.hye | 21.7 | 0.494 |
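
Because this model has two target variants (hye, hye_Latn), the card notes that a sentence-initial `>>id<<` token is required. A sketch of what that looks like in practice, assuming the model id of this repository; the input sentence is illustrative.

```python
# Sketch: selecting the target variant with the ">>id<<" prefix token described above.
# The Armenian-script target is chosen with ">>hye<<"; the source sentence is made up.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ru-hy"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src = ">>hye<< Доброе утро!"  # the language token is prepended to the raw source text
batch = tokenizer([src], return_tensors="pt")
out = model.generate(**batch, max_length=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```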
### System Info:
- hf_name: rus-hye
- source_languages: rus
- target_languages: hye
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-hye/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'hy']
- src_constituents: {'rus'}
- tgt_constituents: {'hye', 'hye_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-hye/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-hye/opus-2020-06-16.test.txt
- src_alpha3: rus
- tgt_alpha3: hye
- short_pair: ru-hy
- chrF2_score: 0.494
- bleu: 21.7
- brevity_penalty: 1.0
- ref_len: 1602.0
- src_name: Russian
- tgt_name: Armenian
- train_date: 2020-06-16
- src_alpha2: ru
- tgt_alpha2: hy
- prefer_old: False
- long_pair: rus-hye
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ru", "hy"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-hy | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"hy",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru",
"hy"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ru #hy #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### rus-hye
* source group: Russian
* target group: Armenian
* OPUS readme: rus-hye
* model: transformer-align
* source language(s): rus
* target language(s): hye hye\_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.7, chr-F: 0.494
### System Info:
* hf\_name: rus-hye
* source\_languages: rus
* target\_languages: hye
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ru', 'hy']
* src\_constituents: {'rus'}
* tgt\_constituents: {'hye', 'hye\_Latn'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: rus
* tgt\_alpha3: hye
* short\_pair: ru-hy
* chrF2\_score: 0.494
* bleu: 21.7
* brevity\_penalty: 1.0
* ref\_len: 1602.0
* src\_name: Russian
* tgt\_name: Armenian
* train\_date: 2020-06-16
* src\_alpha2: ru
* tgt\_alpha2: hy
* prefer\_old: False
* long\_pair: rus-hye
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### rus-hye\n\n\n* source group: Russian\n* target group: Armenian\n* OPUS readme: rus-hye\n* model: transformer-align\n* source language(s): rus\n* target language(s): hye hye\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.7, chr-F: 0.494",
"### System Info:\n\n\n* hf\\_name: rus-hye\n* source\\_languages: rus\n* target\\_languages: hye\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'hy']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'hye', 'hye\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: hye\n* short\\_pair: ru-hy\n* chrF2\\_score: 0.494\n* bleu: 21.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 1602.0\n* src\\_name: Russian\n* tgt\\_name: Armenian\n* train\\_date: 2020-06-16\n* src\\_alpha2: ru\n* tgt\\_alpha2: hy\n* prefer\\_old: False\n* long\\_pair: rus-hye\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #hy #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### rus-hye\n\n\n* source group: Russian\n* target group: Armenian\n* OPUS readme: rus-hye\n* model: transformer-align\n* source language(s): rus\n* target language(s): hye hye\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.7, chr-F: 0.494",
"### System Info:\n\n\n* hf\\_name: rus-hye\n* source\\_languages: rus\n* target\\_languages: hye\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'hy']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'hye', 'hye\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: hye\n* short\\_pair: ru-hy\n* chrF2\\_score: 0.494\n* bleu: 21.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 1602.0\n* src\\_name: Russian\n* tgt\\_name: Armenian\n* train\\_date: 2020-06-16\n* src\\_alpha2: ru\n* tgt\\_alpha2: hy\n* prefer\\_old: False\n* long\\_pair: rus-hye\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
168,
408
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #hy #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### rus-hye\n\n\n* source group: Russian\n* target group: Armenian\n* OPUS readme: rus-hye\n* model: transformer-align\n* source language(s): rus\n* target language(s): hye hye\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.7, chr-F: 0.494### System Info:\n\n\n* hf\\_name: rus-hye\n* source\\_languages: rus\n* target\\_languages: hye\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'hy']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'hye', 'hye\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: hye\n* short\\_pair: ru-hy\n* chrF2\\_score: 0.494\n* bleu: 21.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 1602.0\n* src\\_name: Russian\n* tgt\\_name: Armenian\n* train\\_date: 2020-06-16\n* src\\_alpha2: ru\n* tgt\\_alpha2: hy\n* prefer\\_old: False\n* long\\_pair: rus-hye\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### rus-lit
* source group: Russian
* target group: Lithuanian
* OPUS readme: [rus-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-lit/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): lit
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.lit | 43.5 | 0.675 |
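
The "SentencePiece (spm32k,spm32k)" note means both sides are segmented with 32k-vocabulary SentencePiece models shipped in the original weights archive. A sketch of inspecting that segmentation, under the assumption that the unpacked zip contains a `source.spm` file as in typical OPUS-MT releases; the example sentence is illustrative.

```python
# Sketch: inspecting the SentencePiece segmentation used for pre-processing.
# Assumes the original weights zip has been unpacked and contains "source.spm"
# (the usual OPUS-MT release layout); the Russian sentence is made up.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="source.spm")
pieces = sp.encode("Я изучаю литовский язык.", out_type=str)
print(pieces)  # subword pieces fed to the transformer encoder
```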
### System Info:
- hf_name: rus-lit
- source_languages: rus
- target_languages: lit
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-lit/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'lt']
- src_constituents: {'rus'}
- tgt_constituents: {'lit'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: lit
- short_pair: ru-lt
- chrF2_score: 0.675
- bleu: 43.5
- brevity_penalty: 0.937
- ref_len: 14406.0
- src_name: Russian
- tgt_name: Lithuanian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: lt
- prefer_old: False
- long_pair: rus-lit
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ru", "lt"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-lt | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"lt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru",
"lt"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ru #lt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### rus-lit
* source group: Russian
* target group: Lithuanian
* OPUS readme: rus-lit
* model: transformer-align
* source language(s): rus
* target language(s): lit
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 43.5, chr-F: 0.675
### System Info:
* hf\_name: rus-lit
* source\_languages: rus
* target\_languages: lit
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ru', 'lt']
* src\_constituents: {'rus'}
* tgt\_constituents: {'lit'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: rus
* tgt\_alpha3: lit
* short\_pair: ru-lt
* chrF2\_score: 0.675
* bleu: 43.5
* brevity\_penalty: 0.937
* ref\_len: 14406.0
* src\_name: Russian
* tgt\_name: Lithuanian
* train\_date: 2020-06-17
* src\_alpha2: ru
* tgt\_alpha2: lt
* prefer\_old: False
* long\_pair: rus-lit
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### rus-lit\n\n\n* source group: Russian\n* target group: Lithuanian\n* OPUS readme: rus-lit\n* model: transformer-align\n* source language(s): rus\n* target language(s): lit\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.5, chr-F: 0.675",
"### System Info:\n\n\n* hf\\_name: rus-lit\n* source\\_languages: rus\n* target\\_languages: lit\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'lt']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'lit'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: lit\n* short\\_pair: ru-lt\n* chrF2\\_score: 0.675\n* bleu: 43.5\n* brevity\\_penalty: 0.937\n* ref\\_len: 14406.0\n* src\\_name: Russian\n* tgt\\_name: Lithuanian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: lt\n* prefer\\_old: False\n* long\\_pair: rus-lit\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #lt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### rus-lit\n\n\n* source group: Russian\n* target group: Lithuanian\n* OPUS readme: rus-lit\n* model: transformer-align\n* source language(s): rus\n* target language(s): lit\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.5, chr-F: 0.675",
"### System Info:\n\n\n* hf\\_name: rus-lit\n* source\\_languages: rus\n* target\\_languages: lit\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'lt']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'lit'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: lit\n* short\\_pair: ru-lt\n* chrF2\\_score: 0.675\n* bleu: 43.5\n* brevity\\_penalty: 0.937\n* ref\\_len: 14406.0\n* src\\_name: Russian\n* tgt\\_name: Lithuanian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: lt\n* prefer\\_old: False\n* long\\_pair: rus-lit\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
131,
392
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #lt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### rus-lit\n\n\n* source group: Russian\n* target group: Lithuanian\n* OPUS readme: rus-lit\n* model: transformer-align\n* source language(s): rus\n* target language(s): lit\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.5, chr-F: 0.675### System Info:\n\n\n* hf\\_name: rus-lit\n* source\\_languages: rus\n* target\\_languages: lit\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'lt']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'lit'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: lit\n* short\\_pair: ru-lt\n* chrF2\\_score: 0.675\n* bleu: 43.5\n* brevity\\_penalty: 0.937\n* ref\\_len: 14406.0\n* src\\_name: Russian\n* tgt\\_name: Lithuanian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: lt\n* prefer\\_old: False\n* long\\_pair: rus-lit\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### rus-lav
* source group: Russian
* target group: Latvian
* OPUS readme: [rus-lav](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-lav/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): lav
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lav/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lav/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lav/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.lav | 50.0 | 0.696 |
### System Info:
- hf_name: rus-lav
- source_languages: rus
- target_languages: lav
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-lav/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'lv']
- src_constituents: {'rus'}
- tgt_constituents: {'lav'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lav/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lav/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: lav
- short_pair: ru-lv
- chrF2_score: 0.696
- bleu: 50.0
- brevity_penalty: 0.968
- ref_len: 1518.0
- src_name: Russian
- tgt_name: Latvian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: lv
- prefer_old: False
- long_pair: rus-lav
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ru", "lv"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-lv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"lv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru",
"lv"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ru #lv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### rus-lav
* source group: Russian
* target group: Latvian
* OPUS readme: rus-lav
* model: transformer-align
* source language(s): rus
* target language(s): lav
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 50.0, chr-F: 0.696
### System Info:
* hf\_name: rus-lav
* source\_languages: rus
* target\_languages: lav
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ru', 'lv']
* src\_constituents: {'rus'}
* tgt\_constituents: {'lav'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: rus
* tgt\_alpha3: lav
* short\_pair: ru-lv
* chrF2\_score: 0.696
* bleu: 50.0
* brevity\_penalty: 0.968
* ref\_len: 1518.0
* src\_name: Russian
* tgt\_name: Latvian
* train\_date: 2020-06-17
* src\_alpha2: ru
* tgt\_alpha2: lv
* prefer\_old: False
* long\_pair: rus-lav
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### rus-lav\n\n\n* source group: Russian\n* target group: Latvian\n* OPUS readme: rus-lav\n* model: transformer-align\n* source language(s): rus\n* target language(s): lav\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.0, chr-F: 0.696",
"### System Info:\n\n\n* hf\\_name: rus-lav\n* source\\_languages: rus\n* target\\_languages: lav\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'lv']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'lav'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: lav\n* short\\_pair: ru-lv\n* chrF2\\_score: 0.696\n* bleu: 50.0\n* brevity\\_penalty: 0.968\n* ref\\_len: 1518.0\n* src\\_name: Russian\n* tgt\\_name: Latvian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: lv\n* prefer\\_old: False\n* long\\_pair: rus-lav\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #lv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### rus-lav\n\n\n* source group: Russian\n* target group: Latvian\n* OPUS readme: rus-lav\n* model: transformer-align\n* source language(s): rus\n* target language(s): lav\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.0, chr-F: 0.696",
"### System Info:\n\n\n* hf\\_name: rus-lav\n* source\\_languages: rus\n* target\\_languages: lav\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'lv']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'lav'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: lav\n* short\\_pair: ru-lv\n* chrF2\\_score: 0.696\n* bleu: 50.0\n* brevity\\_penalty: 0.968\n* ref\\_len: 1518.0\n* src\\_name: Russian\n* tgt\\_name: Latvian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: lv\n* prefer\\_old: False\n* long\\_pair: rus-lav\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
134,
399
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #lv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### rus-lav\n\n\n* source group: Russian\n* target group: Latvian\n* OPUS readme: rus-lav\n* model: transformer-align\n* source language(s): rus\n* target language(s): lav\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.0, chr-F: 0.696### System Info:\n\n\n* hf\\_name: rus-lav\n* source\\_languages: rus\n* target\\_languages: lav\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'lv']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'lav'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: lav\n* short\\_pair: ru-lv\n* chrF2\\_score: 0.696\n* bleu: 50.0\n* brevity\\_penalty: 0.968\n* ref\\_len: 1518.0\n* src\\_name: Russian\n* tgt\\_name: Latvian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: lv\n* prefer\\_old: False\n* long\\_pair: rus-lav\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### rus-nor
* source group: Russian
* target group: Norwegian
* OPUS readme: [rus-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-nor/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): nno nob
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.nor | 20.3 | 0.418 |
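
This pair likewise covers two target variants (nob, nno) chosen via the `>>id<<` prefix noted above. A hedged sketch of discovering which prefix tokens the tokenizer actually accepts, assuming the model id of this repository.

```python
# Sketch: listing the ">>id<<" target-language tokens known to the tokenizer,
# so the right one (Bokmål vs. Nynorsk) can be prepended to the source text.
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-ru-no")
print(tokenizer.supported_language_codes)  # expected to include ">>nob<<" and ">>nno<<"
```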
### System Info:
- hf_name: rus-nor
- source_languages: rus
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'no']
- src_constituents: {'rus'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: nor
- short_pair: ru-no
- chrF2_score: 0.418
- bleu: 20.3
- brevity_penalty: 0.946
- ref_len: 11686.0
- src_name: Russian
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: no
- prefer_old: False
- long_pair: rus-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ru", false], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-no | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"no",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru",
"no"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ru #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### rus-nor
* source group: Russian
* target group: Norwegian
* OPUS readme: rus-nor
* model: transformer-align
* source language(s): rus
* target language(s): nno nob
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 20.3, chr-F: 0.418
### System Info:
* hf\_name: rus-nor
* source\_languages: rus
* target\_languages: nor
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ru', 'no']
* src\_constituents: {'rus'}
* tgt\_constituents: {'nob', 'nno'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: rus
* tgt\_alpha3: nor
* short\_pair: ru-no
* chrF2\_score: 0.418
* bleu: 20.3
* brevity\_penalty: 0.946
* ref\_len: 11686.0
* src\_name: Russian
* tgt\_name: Norwegian
* train\_date: 2020-06-17
* src\_alpha2: ru
* tgt\_alpha2: no
* prefer\_old: False
* long\_pair: rus-nor
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### rus-nor\n\n\n* source group: Russian\n* target group: Norwegian\n* OPUS readme: rus-nor\n* model: transformer-align\n* source language(s): rus\n* target language(s): nno nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.3, chr-F: 0.418",
"### System Info:\n\n\n* hf\\_name: rus-nor\n* source\\_languages: rus\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'no']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: nor\n* short\\_pair: ru-no\n* chrF2\\_score: 0.418\n* bleu: 20.3\n* brevity\\_penalty: 0.946\n* ref\\_len: 11686.0\n* src\\_name: Russian\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: rus-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### rus-nor\n\n\n* source group: Russian\n* target group: Norwegian\n* OPUS readme: rus-nor\n* model: transformer-align\n* source language(s): rus\n* target language(s): nno nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.3, chr-F: 0.418",
"### System Info:\n\n\n* hf\\_name: rus-nor\n* source\\_languages: rus\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'no']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: nor\n* short\\_pair: ru-no\n* chrF2\\_score: 0.418\n* bleu: 20.3\n* brevity\\_penalty: 0.946\n* ref\\_len: 11686.0\n* src\\_name: Russian\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: rus-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
161,
397
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### rus-nor\n\n\n* source group: Russian\n* target group: Norwegian\n* OPUS readme: rus-nor\n* model: transformer-align\n* source language(s): rus\n* target language(s): nno nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.3, chr-F: 0.418### System Info:\n\n\n* hf\\_name: rus-nor\n* source\\_languages: rus\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'no']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: nor\n* short\\_pair: ru-no\n* chrF2\\_score: 0.418\n* bleu: 20.3\n* brevity\\_penalty: 0.946\n* ref\\_len: 11686.0\n* src\\_name: Russian\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: rus-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### rus-slv
* source group: Russian
* target group: Slovenian
* OPUS readme: [rus-slv](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-slv/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): slv
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.slv | 32.3 | 0.492 |
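
The card does not ship a usage snippet, so here is a minimal, hedged sketch of loading this checkpoint through the Hugging Face `pipeline` API. The model ID `Helsinki-NLP/opus-mt-ru-sl` is taken from this record's metadata; the input sentence is illustrative, not from the test set.

```python
# Minimal sketch: Russian -> Slovenian via the high-level translation pipeline.
# Assumes `transformers` and `sentencepiece` are installed.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ru-sl")

# Illustrative sentence only (not from the Tatoeba test set).
print(translator("Я читаю книгу.", max_length=128)[0]["translation_text"])
```

Since this pair has a single target variant (`slv`), no `>>id<<` target-language token is needed.
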
### System Info:
- hf_name: rus-slv
- source_languages: rus
- target_languages: slv
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-slv/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'sl']
- src_constituents: {'rus'}
- tgt_constituents: {'slv'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: slv
- short_pair: ru-sl
- chrF2_score: 0.49200000000000005
- bleu: 32.3
- brevity_penalty: 0.992
- ref_len: 2135.0
- src_name: Russian
- tgt_name: Slovenian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: sl
- prefer_old: False
- long_pair: rus-slv
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ru", "sl"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-sl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"sl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru",
"sl"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ru #sl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### rus-slv
* source group: Russian
* target group: Slovenian
* OPUS readme: rus-slv
* model: transformer-align
* source language(s): rus
* target language(s): slv
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 32.3, chr-F: 0.492
### System Info:
* hf\_name: rus-slv
* source\_languages: rus
* target\_languages: slv
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ru', 'sl']
* src\_constituents: {'rus'}
* tgt\_constituents: {'slv'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: rus
* tgt\_alpha3: slv
* short\_pair: ru-sl
* chrF2\_score: 0.49200000000000005
* bleu: 32.3
* brevity\_penalty: 0.992
* ref\_len: 2135.0
* src\_name: Russian
* tgt\_name: Slovenian
* train\_date: 2020-06-17
* src\_alpha2: ru
* tgt\_alpha2: sl
* prefer\_old: False
* long\_pair: rus-slv
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### rus-slv\n\n\n* source group: Russian\n* target group: Slovenian\n* OPUS readme: rus-slv\n* model: transformer-align\n* source language(s): rus\n* target language(s): slv\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.3, chr-F: 0.492",
"### System Info:\n\n\n* hf\\_name: rus-slv\n* source\\_languages: rus\n* target\\_languages: slv\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'sl']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'slv'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: slv\n* short\\_pair: ru-sl\n* chrF2\\_score: 0.49200000000000005\n* bleu: 32.3\n* brevity\\_penalty: 0.992\n* ref\\_len: 2135.0\n* src\\_name: Russian\n* tgt\\_name: Slovenian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: sl\n* prefer\\_old: False\n* long\\_pair: rus-slv\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #sl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### rus-slv\n\n\n* source group: Russian\n* target group: Slovenian\n* OPUS readme: rus-slv\n* model: transformer-align\n* source language(s): rus\n* target language(s): slv\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.3, chr-F: 0.492",
"### System Info:\n\n\n* hf\\_name: rus-slv\n* source\\_languages: rus\n* target\\_languages: slv\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'sl']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'slv'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: slv\n* short\\_pair: ru-sl\n* chrF2\\_score: 0.49200000000000005\n* bleu: 32.3\n* brevity\\_penalty: 0.992\n* ref\\_len: 2135.0\n* src\\_name: Russian\n* tgt\\_name: Slovenian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: sl\n* prefer\\_old: False\n* long\\_pair: rus-slv\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
134,
403
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #sl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### rus-slv\n\n\n* source group: Russian\n* target group: Slovenian\n* OPUS readme: rus-slv\n* model: transformer-align\n* source language(s): rus\n* target language(s): slv\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.3, chr-F: 0.492### System Info:\n\n\n* hf\\_name: rus-slv\n* source\\_languages: rus\n* target\\_languages: slv\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'sl']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'slv'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: slv\n* short\\_pair: ru-sl\n* chrF2\\_score: 0.49200000000000005\n* bleu: 32.3\n* brevity\\_penalty: 0.992\n* ref\\_len: 2135.0\n* src\\_name: Russian\n* tgt\\_name: Slovenian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: sl\n* prefer\\_old: False\n* long\\_pair: rus-slv\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### rus-swe
* source group: Russian
* target group: Swedish
* OPUS readme: [rus-swe](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-swe/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): swe
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-swe/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-swe/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-swe/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.swe | 51.9 | 0.677 |
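
As a lower-level alternative to the pipeline, the checkpoint can be driven directly through `MarianTokenizer`/`MarianMTModel`. The sketch below assumes the standard Marian classes apply; the model ID `Helsinki-NLP/opus-mt-ru-sv` comes from this record's metadata and the input sentence is illustrative.

```python
# Sketch: explicit tokenizer + model usage for Russian -> Swedish.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ru-sv"  # ID from this record's metadata
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The tokenizer applies the SentencePiece (spm32k) preprocessing described above.
batch = tokenizer(["Сегодня хорошая погода."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```
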
### System Info:
- hf_name: rus-swe
- source_languages: rus
- target_languages: swe
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-swe/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'sv']
- src_constituents: {'rus'}
- tgt_constituents: {'swe'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-swe/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-swe/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: swe
- short_pair: ru-sv
- chrF2_score: 0.677
- bleu: 51.9
- brevity_penalty: 0.968
- ref_len: 8449.0
- src_name: Russian
- tgt_name: Swedish
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: sv
- prefer_old: False
- long_pair: rus-swe
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ru", "sv"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru",
"sv"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ru #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### rus-swe
* source group: Russian
* target group: Swedish
* OPUS readme: rus-swe
* model: transformer-align
* source language(s): rus
* target language(s): swe
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 51.9, chr-F: 0.677
### System Info:
* hf\_name: rus-swe
* source\_languages: rus
* target\_languages: swe
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ru', 'sv']
* src\_constituents: {'rus'}
* tgt\_constituents: {'swe'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: rus
* tgt\_alpha3: swe
* short\_pair: ru-sv
* chrF2\_score: 0.677
* bleu: 51.9
* brevity\_penalty: 0.968
* ref\_len: 8449.0
* src\_name: Russian
* tgt\_name: Swedish
* train\_date: 2020-06-17
* src\_alpha2: ru
* tgt\_alpha2: sv
* prefer\_old: False
* long\_pair: rus-swe
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### rus-swe\n\n\n* source group: Russian\n* target group: Swedish\n* OPUS readme: rus-swe\n* model: transformer-align\n* source language(s): rus\n* target language(s): swe\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.9, chr-F: 0.677",
"### System Info:\n\n\n* hf\\_name: rus-swe\n* source\\_languages: rus\n* target\\_languages: swe\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'sv']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'swe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: swe\n* short\\_pair: ru-sv\n* chrF2\\_score: 0.677\n* bleu: 51.9\n* brevity\\_penalty: 0.968\n* ref\\_len: 8449.0\n* src\\_name: Russian\n* tgt\\_name: Swedish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: sv\n* prefer\\_old: False\n* long\\_pair: rus-swe\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### rus-swe\n\n\n* source group: Russian\n* target group: Swedish\n* OPUS readme: rus-swe\n* model: transformer-align\n* source language(s): rus\n* target language(s): swe\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.9, chr-F: 0.677",
"### System Info:\n\n\n* hf\\_name: rus-swe\n* source\\_languages: rus\n* target\\_languages: swe\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'sv']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'swe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: swe\n* short\\_pair: ru-sv\n* chrF2\\_score: 0.677\n* bleu: 51.9\n* brevity\\_penalty: 0.968\n* ref\\_len: 8449.0\n* src\\_name: Russian\n* tgt\\_name: Swedish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: sv\n* prefer\\_old: False\n* long\\_pair: rus-swe\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
134,
396
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### rus-swe\n\n\n* source group: Russian\n* target group: Swedish\n* OPUS readme: rus-swe\n* model: transformer-align\n* source language(s): rus\n* target language(s): swe\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.9, chr-F: 0.677### System Info:\n\n\n* hf\\_name: rus-swe\n* source\\_languages: rus\n* target\\_languages: swe\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'sv']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'swe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: swe\n* short\\_pair: ru-sv\n* chrF2\\_score: 0.677\n* bleu: 51.9\n* brevity\\_penalty: 0.968\n* ref\\_len: 8449.0\n* src\\_name: Russian\n* tgt\\_name: Swedish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: sv\n* prefer\\_old: False\n* long\\_pair: rus-swe\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### rus-ukr
* source group: Russian
* target group: Ukrainian
* OPUS readme: [rus-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-ukr/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.ukr | 64.0 | 0.793 |
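
For many inputs at once, batching plus explicit beam search is the usual pattern. The sketch below is illustrative only: the model ID `Helsinki-NLP/opus-mt-ru-uk` comes from this record's metadata, and the decoding settings are assumptions rather than the settings used for the benchmark above.

```python
# Sketch: batched Russian -> Ukrainian decoding with beam search.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ru-uk"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentences = ["Как тебя зовут?", "Я живу в Киеве."]  # illustrative inputs
batch = tokenizer(sentences, return_tensors="pt", padding=True)
outputs = model.generate(**batch, num_beams=4, max_length=128)
for src, hyp in zip(sentences, tokenizer.batch_decode(outputs, skip_special_tokens=True)):
    print(f"{src} -> {hyp}")
```
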
### System Info:
- hf_name: rus-ukr
- source_languages: rus
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'uk']
- src_constituents: {'rus'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ukr/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: ukr
- short_pair: ru-uk
- chrF2_score: 0.7929999999999999
- bleu: 64.0
- brevity_penalty: 0.99
- ref_len: 60212.0
- src_name: Russian
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: uk
- prefer_old: False
- long_pair: rus-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ru", "uk"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-uk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ru",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru",
"uk"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ru #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### rus-ukr
* source group: Russian
* target group: Ukrainian
* OPUS readme: rus-ukr
* model: transformer-align
* source language(s): rus
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 64.0, chr-F: 0.793
### System Info:
* hf\_name: rus-ukr
* source\_languages: rus
* target\_languages: ukr
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ru', 'uk']
* src\_constituents: {'rus'}
* tgt\_constituents: {'ukr'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: rus
* tgt\_alpha3: ukr
* short\_pair: ru-uk
* chrF2\_score: 0.7929999999999999
* bleu: 64.0
* brevity\_penalty: 0.99
* ref\_len: 60212.0
* src\_name: Russian
* tgt\_name: Ukrainian
* train\_date: 2020-06-17
* src\_alpha2: ru
* tgt\_alpha2: uk
* prefer\_old: False
* long\_pair: rus-ukr
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### rus-ukr\n\n\n* source group: Russian\n* target group: Ukrainian\n* OPUS readme: rus-ukr\n* model: transformer-align\n* source language(s): rus\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 64.0, chr-F: 0.793",
"### System Info:\n\n\n* hf\\_name: rus-ukr\n* source\\_languages: rus\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'uk']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: ukr\n* short\\_pair: ru-uk\n* chrF2\\_score: 0.7929999999999999\n* bleu: 64.0\n* brevity\\_penalty: 0.99\n* ref\\_len: 60212.0\n* src\\_name: Russian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: rus-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### rus-ukr\n\n\n* source group: Russian\n* target group: Ukrainian\n* OPUS readme: rus-ukr\n* model: transformer-align\n* source language(s): rus\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 64.0, chr-F: 0.793",
"### System Info:\n\n\n* hf\\_name: rus-ukr\n* source\\_languages: rus\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'uk']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: ukr\n* short\\_pair: ru-uk\n* chrF2\\_score: 0.7929999999999999\n* bleu: 64.0\n* brevity\\_penalty: 0.99\n* ref\\_len: 60212.0\n* src\\_name: Russian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: rus-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
134,
408
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ru #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### rus-ukr\n\n\n* source group: Russian\n* target group: Ukrainian\n* OPUS readme: rus-ukr\n* model: transformer-align\n* source language(s): rus\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 64.0, chr-F: 0.793### System Info:\n\n\n* hf\\_name: rus-ukr\n* source\\_languages: rus\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'uk']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: ukr\n* short\\_pair: ru-uk\n* chrF2\\_score: 0.7929999999999999\n* bleu: 64.0\n* brevity\\_penalty: 0.99\n* ref\\_len: 60212.0\n* src\\_name: Russian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: rus-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### rus-vie
* source group: Russian
* target group: Vietnamese
* OPUS readme: [rus-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-vie/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-vie/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.vie | 16.9 | 0.346 |
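
The BLEU and chr-F figures above come from the released eval file. One rough way to reproduce such scores for your own outputs is `sacrebleu`, sketched below; this is an assumption about tooling — the official Tatoeba-Challenge scoring scripts may use different options, and `sacrebleu` reports chrF on a 0–100 scale while the table uses 0–1.

```python
# Rough sketch: scoring hypotheses against references with sacrebleu.
import sacrebleu

hypotheses = ["Tôi đang đọc sách."]             # model outputs (illustrative)
references = [["Tôi đang đọc một cuốn sách."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}, chrF = {chrf.score:.1f}")
```
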
### System Info:
- hf_name: rus-vie
- source_languages: rus
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'vi']
- src_constituents: {'rus'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-vie/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: vie
- short_pair: ru-vi
- chrF2_score: 0.34600000000000003
- bleu: 16.9
- brevity_penalty: 1.0
- ref_len: 2566.0
- src_name: Russian
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: vi
- prefer_old: False
- long_pair: rus-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ru", "vi"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ru-vi | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"ru",
"vi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ru",
"vi"
] | TAGS
#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #ru #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### rus-vie
* source group: Russian
* target group: Vietnamese
* OPUS readme: rus-vie
* model: transformer-align
* source language(s): rus
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 16.9, chr-F: 0.346
### System Info:
* hf\_name: rus-vie
* source\_languages: rus
* target\_languages: vie
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ru', 'vi']
* src\_constituents: {'rus'}
* tgt\_constituents: {'vie', 'vie\_Hani'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: rus
* tgt\_alpha3: vie
* short\_pair: ru-vi
* chrF2\_score: 0.34600000000000003
* bleu: 16.9
* brevity\_penalty: 1.0
* ref\_len: 2566.0
* src\_name: Russian
* tgt\_name: Vietnamese
* train\_date: 2020-06-17
* src\_alpha2: ru
* tgt\_alpha2: vi
* prefer\_old: False
* long\_pair: rus-vie
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### rus-vie\n\n\n* source group: Russian\n* target group: Vietnamese\n* OPUS readme: rus-vie\n* model: transformer-align\n* source language(s): rus\n* target language(s): vie\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.9, chr-F: 0.346",
"### System Info:\n\n\n* hf\\_name: rus-vie\n* source\\_languages: rus\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'vi']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: vie\n* short\\_pair: ru-vi\n* chrF2\\_score: 0.34600000000000003\n* bleu: 16.9\n* brevity\\_penalty: 1.0\n* ref\\_len: 2566.0\n* src\\_name: Russian\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: rus-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #ru #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### rus-vie\n\n\n* source group: Russian\n* target group: Vietnamese\n* OPUS readme: rus-vie\n* model: transformer-align\n* source language(s): rus\n* target language(s): vie\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.9, chr-F: 0.346",
"### System Info:\n\n\n* hf\\_name: rus-vie\n* source\\_languages: rus\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'vi']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: vie\n* short\\_pair: ru-vi\n* chrF2\\_score: 0.34600000000000003\n* bleu: 16.9\n* brevity\\_penalty: 1.0\n* ref\\_len: 2566.0\n* src\\_name: Russian\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: rus-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
55,
131,
405
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #ru #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### rus-vie\n\n\n* source group: Russian\n* target group: Vietnamese\n* OPUS readme: rus-vie\n* model: transformer-align\n* source language(s): rus\n* target language(s): vie\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.9, chr-F: 0.346### System Info:\n\n\n* hf\\_name: rus-vie\n* source\\_languages: rus\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ru', 'vi']\n* src\\_constituents: {'rus'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: rus\n* tgt\\_alpha3: vie\n* short\\_pair: ru-vi\n* chrF2\\_score: 0.34600000000000003\n* bleu: 16.9\n* brevity\\_penalty: 1.0\n* ref\\_len: 2566.0\n* src\\_name: Russian\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: ru\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: rus-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-run-en
* source languages: run
* target languages: en
* OPUS readme: [run-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/run-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/run-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.run.en | 42.7 | 0.583 |
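
If the original Marian weights are needed rather than the converted checkpoint, the zip linked above can be fetched directly. Below is a small download sketch; the URL is copied from this card and `requests` is assumed to be available.

```python
# Sketch: download the original OPUS-MT weights archive linked above.
import requests

url = "https://object.pouta.csc.fi/OPUS-MT-models/run-en/opus-2020-01-21.zip"
with requests.get(url, stream=True, timeout=60) as response:
    response.raise_for_status()
    with open("opus-mt-run-en-2020-01-21.zip", "wb") as f:
        for chunk in response.iter_content(chunk_size=1 << 20):
            f.write(chunk)
```
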
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-run-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"run",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #run #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-run-en
* source languages: run
* target languages: en
* OPUS readme: run-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 42.7, chr-F: 0.583
| [
"### opus-mt-run-en\n\n\n* source languages: run\n* target languages: en\n* OPUS readme: run-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.7, chr-F: 0.583"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #run #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-run-en\n\n\n* source languages: run\n* target languages: en\n* OPUS readme: run-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.7, chr-F: 0.583"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #run #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-run-en\n\n\n* source languages: run\n* target languages: en\n* OPUS readme: run-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.7, chr-F: 0.583"
] |
translation | transformers |
### opus-mt-run-es
* source languages: run
* target languages: es
* OPUS readme: [run-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/run-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/run-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.run.es | 26.9 | 0.452 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-run-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"run",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #run #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-run-es
* source languages: run
* target languages: es
* OPUS readme: run-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.9, chr-F: 0.452
| [
"### opus-mt-run-es\n\n\n* source languages: run\n* target languages: es\n* OPUS readme: run-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.9, chr-F: 0.452"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #run #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-run-es\n\n\n* source languages: run\n* target languages: es\n* OPUS readme: run-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.9, chr-F: 0.452"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #run #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-run-es\n\n\n* source languages: run\n* target languages: es\n* OPUS readme: run-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.9, chr-F: 0.452"
] |
translation | transformers |
### opus-mt-run-sv
* source languages: run
* target languages: sv
* OPUS readme: [run-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/run-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/run-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/run-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.run.sv | 30.1 | 0.484 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-run-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"run",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #run #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-run-sv
* source languages: run
* target languages: sv
* OPUS readme: run-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.1, chr-F: 0.484
| [
"### opus-mt-run-sv\n\n\n* source languages: run\n* target languages: sv\n* OPUS readme: run-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.1, chr-F: 0.484"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #run #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-run-sv\n\n\n* source languages: run\n* target languages: sv\n* OPUS readme: run-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.1, chr-F: 0.484"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #run #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-run-sv\n\n\n* source languages: run\n* target languages: sv\n* OPUS readme: run-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.1, chr-F: 0.484"
] |
translation | transformers |
### opus-mt-rw-en
* source languages: rw
* target languages: en
* OPUS readme: [rw-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rw-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rw-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rw-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rw-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.rw.en | 37.3 | 0.530 |
| Tatoeba.rw.en | 49.8 | 0.643 |
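
A quick smoke test of the converted checkpoint can look like the sketch below; the model ID `Helsinki-NLP/opus-mt-rw-en` is from this record's metadata and the input greeting is illustrative.

```python
# Sketch: Kinyarwanda -> English quick check via the translation pipeline.
from transformers import pipeline

rw_en = pipeline("translation", model="Helsinki-NLP/opus-mt-rw-en")
print(rw_en("Muraho, amakuru?")[0]["translation_text"])  # illustrative greeting
```
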
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-rw-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"rw",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #rw #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-rw-en
* source languages: rw
* target languages: en
* OPUS readme: rw-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.3, chr-F: 0.530
testset: URL, BLEU: 49.8, chr-F: 0.643
| [
"### opus-mt-rw-en\n\n\n* source languages: rw\n* target languages: en\n* OPUS readme: rw-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.3, chr-F: 0.530\ntestset: URL, BLEU: 49.8, chr-F: 0.643"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rw #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-rw-en\n\n\n* source languages: rw\n* target languages: en\n* OPUS readme: rw-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.3, chr-F: 0.530\ntestset: URL, BLEU: 49.8, chr-F: 0.643"
] | [
52,
131
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rw #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-rw-en\n\n\n* source languages: rw\n* target languages: en\n* OPUS readme: rw-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.3, chr-F: 0.530\ntestset: URL, BLEU: 49.8, chr-F: 0.643"
] |
translation | transformers |
### opus-mt-rw-es
* source languages: rw
* target languages: es
* OPUS readme: [rw-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rw-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rw-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rw-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rw-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.rw.es | 26.2 | 0.445 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-rw-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"rw",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #rw #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-rw-es
* source languages: rw
* target languages: es
* OPUS readme: rw-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.2, chr-F: 0.445
| [
"### opus-mt-rw-es\n\n\n* source languages: rw\n* target languages: es\n* OPUS readme: rw-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.2, chr-F: 0.445"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rw #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-rw-es\n\n\n* source languages: rw\n* target languages: es\n* OPUS readme: rw-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.2, chr-F: 0.445"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rw #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-rw-es\n\n\n* source languages: rw\n* target languages: es\n* OPUS readme: rw-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.2, chr-F: 0.445"
] |
translation | transformers |
### opus-mt-rw-fr
* source languages: rw
* target languages: fr
* OPUS readme: [rw-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rw-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rw-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rw-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rw-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.rw.fr | 26.7 | 0.443 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-rw-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"rw",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #rw #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-rw-fr
* source languages: rw
* target languages: fr
* OPUS readme: rw-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.7, chr-F: 0.443
| [
"### opus-mt-rw-fr\n\n\n* source languages: rw\n* target languages: fr\n* OPUS readme: rw-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.443"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rw #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-rw-fr\n\n\n* source languages: rw\n* target languages: fr\n* OPUS readme: rw-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.443"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rw #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-rw-fr\n\n\n* source languages: rw\n* target languages: fr\n* OPUS readme: rw-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.443"
] |
translation | transformers |
### opus-mt-rw-sv
* source languages: rw
* target languages: sv
* OPUS readme: [rw-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/rw-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/rw-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/rw-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/rw-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.rw.sv | 29.1 | 0.476 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-rw-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"rw",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #rw #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-rw-sv
* source languages: rw
* target languages: sv
* OPUS readme: rw-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 29.1, chr-F: 0.476
| [
"### opus-mt-rw-sv\n\n\n* source languages: rw\n* target languages: sv\n* OPUS readme: rw-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.1, chr-F: 0.476"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rw #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-rw-sv\n\n\n* source languages: rw\n* target languages: sv\n* OPUS readme: rw-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.1, chr-F: 0.476"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #rw #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-rw-sv\n\n\n* source languages: rw\n* target languages: sv\n* OPUS readme: rw-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.1, chr-F: 0.476"
] |
translation | transformers |
### sal-eng
* source group: Salishan languages
* target group: English
* OPUS readme: [sal-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sal-eng/README.md)
* model: transformer
* source language(s): shs_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-14.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sal-eng/opus-2020-07-14.zip)
* test set translations: [opus-2020-07-14.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sal-eng/opus-2020-07-14.test.txt)
* test set scores: [opus-2020-07-14.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sal-eng/opus-2020-07-14.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.multi.eng | 38.7 | 0.572 |
| Tatoeba-test.shs.eng | 2.2 | 0.097 |
| Tatoeba-test.shs-eng.shs.eng | 2.2 | 0.097 |
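
Loading works the same way as for the other Marian checkpoints, but note that the shs→eng Tatoeba BLEU above is only 2.2, so outputs should be treated as exploratory at best. The sketch below assumes the standard Marian classes and uses the model ID `Helsinki-NLP/opus-mt-sal-en` from this record's metadata.

```python
# Sketch: loading the Salishan-languages -> English checkpoint.
# Caveat: the Tatoeba shs->eng BLEU reported above is only 2.2.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-sal-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

text = "..."  # replace with a Shuswap (shs_Latn) sentence
batch = tokenizer([text], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```
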
### System Info:
- hf_name: sal-eng
- source_languages: sal
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sal-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sal', 'en']
- src_constituents: {'shs_Latn'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/sal-eng/opus-2020-07-14.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/sal-eng/opus-2020-07-14.test.txt
- src_alpha3: sal
- tgt_alpha3: eng
- short_pair: sal-en
- chrF2_score: 0.09699999999999999
- bleu: 2.2
- brevity_penalty: 0.8190000000000001
- ref_len: 222.0
- src_name: Salishan languages
- tgt_name: English
- train_date: 2020-07-14
- src_alpha2: sal
- tgt_alpha2: en
- prefer_old: False
- long_pair: sal-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["sal", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sal-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sal",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sal",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sal #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### sal-eng
* source group: Salishan languages
* target group: English
* OPUS readme: sal-eng
* model: transformer
* source language(s): shs\_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.7, chr-F: 0.572
testset: URL, BLEU: 2.2, chr-F: 0.097
testset: URL, BLEU: 2.2, chr-F: 0.097
### System Info:
* hf\_name: sal-eng
* source\_languages: sal
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['sal', 'en']
* src\_constituents: {'shs\_Latn'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: sal
* tgt\_alpha3: eng
* short\_pair: sal-en
* chrF2\_score: 0.09699999999999999
* bleu: 2.2
* brevity\_penalty: 0.8190000000000001
* ref\_len: 222.0
* src\_name: Salishan languages
* tgt\_name: English
* train\_date: 2020-07-14
* src\_alpha2: sal
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: sal-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### sal-eng\n\n\n* source group: Salishan languages\n* target group: English\n* OPUS readme: sal-eng\n* model: transformer\n* source language(s): shs\\_Latn\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.7, chr-F: 0.572\ntestset: URL, BLEU: 2.2, chr-F: 0.097\ntestset: URL, BLEU: 2.2, chr-F: 0.097",
"### System Info:\n\n\n* hf\\_name: sal-eng\n* source\\_languages: sal\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sal', 'en']\n* src\\_constituents: {'shs\\_Latn'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: sal\n* tgt\\_alpha3: eng\n* short\\_pair: sal-en\n* chrF2\\_score: 0.09699999999999999\n* bleu: 2.2\n* brevity\\_penalty: 0.8190000000000001\n* ref\\_len: 222.0\n* src\\_name: Salishan languages\n* tgt\\_name: English\n* train\\_date: 2020-07-14\n* src\\_alpha2: sal\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: sal-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sal #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### sal-eng\n\n\n* source group: Salishan languages\n* target group: English\n* OPUS readme: sal-eng\n* model: transformer\n* source language(s): shs\\_Latn\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.7, chr-F: 0.572\ntestset: URL, BLEU: 2.2, chr-F: 0.097\ntestset: URL, BLEU: 2.2, chr-F: 0.097",
"### System Info:\n\n\n* hf\\_name: sal-eng\n* source\\_languages: sal\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sal', 'en']\n* src\\_constituents: {'shs\\_Latn'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: sal\n* tgt\\_alpha3: eng\n* short\\_pair: sal-en\n* chrF2\\_score: 0.09699999999999999\n* bleu: 2.2\n* brevity\\_penalty: 0.8190000000000001\n* ref\\_len: 222.0\n* src\\_name: Salishan languages\n* tgt\\_name: English\n* train\\_date: 2020-07-14\n* src\\_alpha2: sal\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: sal-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
182,
419
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sal #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### sal-eng\n\n\n* source group: Salishan languages\n* target group: English\n* OPUS readme: sal-eng\n* model: transformer\n* source language(s): shs\\_Latn\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.7, chr-F: 0.572\ntestset: URL, BLEU: 2.2, chr-F: 0.097\ntestset: URL, BLEU: 2.2, chr-F: 0.097### System Info:\n\n\n* hf\\_name: sal-eng\n* source\\_languages: sal\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sal', 'en']\n* src\\_constituents: {'shs\\_Latn'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: sal\n* tgt\\_alpha3: eng\n* short\\_pair: sal-en\n* chrF2\\_score: 0.09699999999999999\n* bleu: 2.2\n* brevity\\_penalty: 0.8190000000000001\n* ref\\_len: 222.0\n* src\\_name: Salishan languages\n* tgt\\_name: English\n* train\\_date: 2020-07-14\n* src\\_alpha2: sal\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: sal-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### sem-eng
* source group: Semitic languages
* target group: English
* OPUS readme: [sem-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sem-eng/README.md)
* model: transformer
* source language(s): acm afb amh apc ara arq ary arz heb mlt tir
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.amh-eng.amh.eng | 37.5 | 0.565 |
| Tatoeba-test.ara-eng.ara.eng | 38.9 | 0.566 |
| Tatoeba-test.heb-eng.heb.eng | 44.6 | 0.610 |
| Tatoeba-test.mlt-eng.mlt.eng | 53.7 | 0.688 |
| Tatoeba-test.multi.eng | 41.7 | 0.588 |
| Tatoeba-test.tir-eng.tir.eng | 18.3 | 0.370 |
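
Since every source language in this group maps to a single English target, no language token is needed on the input. Below is a minimal usage sketch with the `transformers` Marian classes; the example sentences and generation settings are illustrative only.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-sem-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Sentences from any of the supported Semitic source languages can be mixed
# in one batch; the target is always English, so no >>id<< token is required.
src_texts = [
    "مرحبا بالعالم",   # Arabic
    "שלום עולם",        # Hebrew
]

batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```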
### System Info:
- hf_name: sem-eng
- source_languages: sem
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sem-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['mt', 'ar', 'he', 'ti', 'am', 'sem', 'en']
- src_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/sem-eng/opus2m-2020-08-01.test.txt
- src_alpha3: sem
- tgt_alpha3: eng
- short_pair: sem-en
- chrF2_score: 0.588
- bleu: 41.7
- brevity_penalty: 0.987
- ref_len: 72950.0
- src_name: Semitic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: sem
- tgt_alpha2: en
- prefer_old: False
- long_pair: sem-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["mt", "ar", "he", "ti", "am", "sem", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sem-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mt",
"ar",
"he",
"ti",
"am",
"sem",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"mt",
"ar",
"he",
"ti",
"am",
"sem",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #mt #ar #he #ti #am #sem #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### sem-eng
* source group: Semitic languages
* target group: English
* OPUS readme: sem-eng
* model: transformer
* source language(s): acm afb amh apc ara arq ary arz heb mlt tir
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.5, chr-F: 0.565
testset: URL, BLEU: 38.9, chr-F: 0.566
testset: URL, BLEU: 44.6, chr-F: 0.610
testset: URL, BLEU: 53.7, chr-F: 0.688
testset: URL, BLEU: 41.7, chr-F: 0.588
testset: URL, BLEU: 18.3, chr-F: 0.370
### System Info:
* hf\_name: sem-eng
* source\_languages: sem
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['mt', 'ar', 'he', 'ti', 'am', 'sem', 'en']
* src\_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: sem
* tgt\_alpha3: eng
* short\_pair: sem-en
* chrF2\_score: 0.588
* bleu: 41.7
* brevity\_penalty: 0.987
* ref\_len: 72950.0
* src\_name: Semitic languages
* tgt\_name: English
* train\_date: 2020-08-01
* src\_alpha2: sem
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: sem-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### sem-eng\n\n\n* source group: Semitic languages\n* target group: English\n* OPUS readme: sem-eng\n* model: transformer\n* source language(s): acm afb amh apc ara arq ary arz heb mlt tir\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.5, chr-F: 0.565\ntestset: URL, BLEU: 38.9, chr-F: 0.566\ntestset: URL, BLEU: 44.6, chr-F: 0.610\ntestset: URL, BLEU: 53.7, chr-F: 0.688\ntestset: URL, BLEU: 41.7, chr-F: 0.588\ntestset: URL, BLEU: 18.3, chr-F: 0.370",
"### System Info:\n\n\n* hf\\_name: sem-eng\n* source\\_languages: sem\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['mt', 'ar', 'he', 'ti', 'am', 'sem', 'en']\n* src\\_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: sem\n* tgt\\_alpha3: eng\n* short\\_pair: sem-en\n* chrF2\\_score: 0.588\n* bleu: 41.7\n* brevity\\_penalty: 0.987\n* ref\\_len: 72950.0\n* src\\_name: Semitic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: sem\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: sem-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mt #ar #he #ti #am #sem #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### sem-eng\n\n\n* source group: Semitic languages\n* target group: English\n* OPUS readme: sem-eng\n* model: transformer\n* source language(s): acm afb amh apc ara arq ary arz heb mlt tir\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.5, chr-F: 0.565\ntestset: URL, BLEU: 38.9, chr-F: 0.566\ntestset: URL, BLEU: 44.6, chr-F: 0.610\ntestset: URL, BLEU: 53.7, chr-F: 0.688\ntestset: URL, BLEU: 41.7, chr-F: 0.588\ntestset: URL, BLEU: 18.3, chr-F: 0.370",
"### System Info:\n\n\n* hf\\_name: sem-eng\n* source\\_languages: sem\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['mt', 'ar', 'he', 'ti', 'am', 'sem', 'en']\n* src\\_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: sem\n* tgt\\_alpha3: eng\n* short\\_pair: sem-en\n* chrF2\\_score: 0.588\n* bleu: 41.7\n* brevity\\_penalty: 0.987\n* ref\\_len: 72950.0\n* src\\_name: Semitic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: sem\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: sem-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
62,
262,
469
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mt #ar #he #ti #am #sem #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### sem-eng\n\n\n* source group: Semitic languages\n* target group: English\n* OPUS readme: sem-eng\n* model: transformer\n* source language(s): acm afb amh apc ara arq ary arz heb mlt tir\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.5, chr-F: 0.565\ntestset: URL, BLEU: 38.9, chr-F: 0.566\ntestset: URL, BLEU: 44.6, chr-F: 0.610\ntestset: URL, BLEU: 53.7, chr-F: 0.688\ntestset: URL, BLEU: 41.7, chr-F: 0.588\ntestset: URL, BLEU: 18.3, chr-F: 0.370### System Info:\n\n\n* hf\\_name: sem-eng\n* source\\_languages: sem\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['mt', 'ar', 'he', 'ti', 'am', 'sem', 'en']\n* src\\_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: sem\n* tgt\\_alpha3: eng\n* short\\_pair: sem-en\n* chrF2\\_score: 0.588\n* bleu: 41.7\n* brevity\\_penalty: 0.987\n* ref\\_len: 72950.0\n* src\\_name: Semitic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: sem\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: sem-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### sem-sem
* source group: Semitic languages
* target group: Semitic languages
* OPUS readme: [sem-sem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sem-sem/README.md)
* model: transformer
* source language(s): apc ara arq arz heb mlt
* target language(s): apc ara arq arz heb mlt
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara-ara.ara.ara | 4.2 | 0.200 |
| Tatoeba-test.ara-heb.ara.heb | 34.0 | 0.542 |
| Tatoeba-test.ara-mlt.ara.mlt | 16.6 | 0.513 |
| Tatoeba-test.heb-ara.heb.ara | 18.8 | 0.477 |
| Tatoeba-test.mlt-ara.mlt.ara | 20.7 | 0.388 |
| Tatoeba-test.multi.multi | 27.1 | 0.507 |
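
Because this model has several target languages, the desired output language must be selected with the sentence-initial `>>id<<` token described above. A minimal sketch follows; the example sentence and the chosen target id (`>>heb<<`) are illustrative, and any valid target id from the constituent list (e.g. `>>ara<<`, `>>mlt<<`) works the same way.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-sem-sem"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The first token selects the target language (here Hebrew).
src_texts = [">>heb<< مرحبا بالعالم"]

batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```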
### System Info:
- hf_name: sem-sem
- source_languages: sem
- target_languages: sem
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sem-sem/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['mt', 'ar', 'he', 'ti', 'am', 'sem']
- src_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}
- tgt_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.test.txt
- src_alpha3: sem
- tgt_alpha3: sem
- short_pair: sem-sem
- chrF2_score: 0.507
- bleu: 27.1
- brevity_penalty: 0.972
- ref_len: 13472.0
- src_name: Semitic languages
- tgt_name: Semitic languages
- train_date: 2020-07-27
- src_alpha2: sem
- tgt_alpha2: sem
- prefer_old: False
- long_pair: sem-sem
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["mt", "ar", "he", "ti", "am", "sem"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sem-sem | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"mt",
"ar",
"he",
"ti",
"am",
"sem",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"mt",
"ar",
"he",
"ti",
"am",
"sem"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #mt #ar #he #ti #am #sem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### sem-sem
* source group: Semitic languages
* target group: Semitic languages
* OPUS readme: sem-sem
* model: transformer
* source language(s): apc ara arq arz heb mlt
* target language(s): apc ara arq arz heb mlt
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 4.2, chr-F: 0.200
testset: URL, BLEU: 34.0, chr-F: 0.542
testset: URL, BLEU: 16.6, chr-F: 0.513
testset: URL, BLEU: 18.8, chr-F: 0.477
testset: URL, BLEU: 20.7, chr-F: 0.388
testset: URL, BLEU: 27.1, chr-F: 0.507
### System Info:
* hf\_name: sem-sem
* source\_languages: sem
* target\_languages: sem
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['mt', 'ar', 'he', 'ti', 'am', 'sem']
* src\_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}
* tgt\_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}
* src\_multilingual: True
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: sem
* tgt\_alpha3: sem
* short\_pair: sem-sem
* chrF2\_score: 0.507
* bleu: 27.1
* brevity\_penalty: 0.972
* ref\_len: 13472.0
* src\_name: Semitic languages
* tgt\_name: Semitic languages
* train\_date: 2020-07-27
* src\_alpha2: sem
* tgt\_alpha2: sem
* prefer\_old: False
* long\_pair: sem-sem
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### sem-sem\n\n\n* source group: Semitic languages\n* target group: Semitic languages\n* OPUS readme: sem-sem\n* model: transformer\n* source language(s): apc ara arq arz heb mlt\n* target language(s): apc ara arq arz heb mlt\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 4.2, chr-F: 0.200\ntestset: URL, BLEU: 34.0, chr-F: 0.542\ntestset: URL, BLEU: 16.6, chr-F: 0.513\ntestset: URL, BLEU: 18.8, chr-F: 0.477\ntestset: URL, BLEU: 20.7, chr-F: 0.388\ntestset: URL, BLEU: 27.1, chr-F: 0.507",
"### System Info:\n\n\n* hf\\_name: sem-sem\n* source\\_languages: sem\n* target\\_languages: sem\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['mt', 'ar', 'he', 'ti', 'am', 'sem']\n* src\\_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}\n* tgt\\_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: sem\n* tgt\\_alpha3: sem\n* short\\_pair: sem-sem\n* chrF2\\_score: 0.507\n* bleu: 27.1\n* brevity\\_penalty: 0.972\n* ref\\_len: 13472.0\n* src\\_name: Semitic languages\n* tgt\\_name: Semitic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: sem\n* tgt\\_alpha2: sem\n* prefer\\_old: False\n* long\\_pair: sem-sem\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mt #ar #he #ti #am #sem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### sem-sem\n\n\n* source group: Semitic languages\n* target group: Semitic languages\n* OPUS readme: sem-sem\n* model: transformer\n* source language(s): apc ara arq arz heb mlt\n* target language(s): apc ara arq arz heb mlt\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 4.2, chr-F: 0.200\ntestset: URL, BLEU: 34.0, chr-F: 0.542\ntestset: URL, BLEU: 16.6, chr-F: 0.513\ntestset: URL, BLEU: 18.8, chr-F: 0.477\ntestset: URL, BLEU: 20.7, chr-F: 0.388\ntestset: URL, BLEU: 27.1, chr-F: 0.507",
"### System Info:\n\n\n* hf\\_name: sem-sem\n* source\\_languages: sem\n* target\\_languages: sem\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['mt', 'ar', 'he', 'ti', 'am', 'sem']\n* src\\_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}\n* tgt\\_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: sem\n* tgt\\_alpha3: sem\n* short\\_pair: sem-sem\n* chrF2\\_score: 0.507\n* bleu: 27.1\n* brevity\\_penalty: 0.972\n* ref\\_len: 13472.0\n* src\\_name: Semitic languages\n* tgt\\_name: Semitic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: sem\n* tgt\\_alpha2: sem\n* prefer\\_old: False\n* long\\_pair: sem-sem\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
60,
294,
521
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mt #ar #he #ti #am #sem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### sem-sem\n\n\n* source group: Semitic languages\n* target group: Semitic languages\n* OPUS readme: sem-sem\n* model: transformer\n* source language(s): apc ara arq arz heb mlt\n* target language(s): apc ara arq arz heb mlt\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 4.2, chr-F: 0.200\ntestset: URL, BLEU: 34.0, chr-F: 0.542\ntestset: URL, BLEU: 16.6, chr-F: 0.513\ntestset: URL, BLEU: 18.8, chr-F: 0.477\ntestset: URL, BLEU: 20.7, chr-F: 0.388\ntestset: URL, BLEU: 27.1, chr-F: 0.507### System Info:\n\n\n* hf\\_name: sem-sem\n* source\\_languages: sem\n* target\\_languages: sem\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['mt', 'ar', 'he', 'ti', 'am', 'sem']\n* src\\_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}\n* tgt\\_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: sem\n* tgt\\_alpha3: sem\n* short\\_pair: sem-sem\n* chrF2\\_score: 0.507\n* bleu: 27.1\n* brevity\\_penalty: 0.972\n* ref\\_len: 13472.0\n* src\\_name: Semitic languages\n* tgt\\_name: Semitic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: sem\n* tgt\\_alpha2: sem\n* prefer\\_old: False\n* long\\_pair: sem-sem\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-sg-en
* source languages: sg
* target languages: en
* OPUS readme: [sg-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sg-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sg-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sg.en | 32.0 | 0.477 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sg-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sg",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sg #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sg-en
* source languages: sg
* target languages: en
* OPUS readme: sg-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 32.0, chr-F: 0.477
| [
"### opus-mt-sg-en\n\n\n* source languages: sg\n* target languages: en\n* OPUS readme: sg-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.0, chr-F: 0.477"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sg #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sg-en\n\n\n* source languages: sg\n* target languages: en\n* OPUS readme: sg-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.0, chr-F: 0.477"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sg #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sg-en\n\n\n* source languages: sg\n* target languages: en\n* OPUS readme: sg-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.0, chr-F: 0.477"
] |
translation | transformers |
### opus-mt-sg-es
* source languages: sg
* target languages: es
* OPUS readme: [sg-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sg-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sg-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sg.es | 21.3 | 0.385 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sg-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sg",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sg #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sg-es
* source languages: sg
* target languages: es
* OPUS readme: sg-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.3, chr-F: 0.385
| [
"### opus-mt-sg-es\n\n\n* source languages: sg\n* target languages: es\n* OPUS readme: sg-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.3, chr-F: 0.385"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sg #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sg-es\n\n\n* source languages: sg\n* target languages: es\n* OPUS readme: sg-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.3, chr-F: 0.385"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sg #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sg-es\n\n\n* source languages: sg\n* target languages: es\n* OPUS readme: sg-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.3, chr-F: 0.385"
] |
translation | transformers |
### opus-mt-sg-fi
* source languages: sg
* target languages: fi
* OPUS readme: [sg-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sg-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/sg-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sg.fi | 22.7 | 0.438 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sg-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sg",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sg #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sg-fi
* source languages: sg
* target languages: fi
* OPUS readme: sg-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.7, chr-F: 0.438
| [
"### opus-mt-sg-fi\n\n\n* source languages: sg\n* target languages: fi\n* OPUS readme: sg-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.7, chr-F: 0.438"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sg #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sg-fi\n\n\n* source languages: sg\n* target languages: fi\n* OPUS readme: sg-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.7, chr-F: 0.438"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sg #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sg-fi\n\n\n* source languages: sg\n* target languages: fi\n* OPUS readme: sg-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.7, chr-F: 0.438"
] |
translation | transformers |
### opus-mt-sg-fr
* source languages: sg
* target languages: fr
* OPUS readme: [sg-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sg-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sg-fr/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-fr/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-fr/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sg.fr | 24.9 | 0.420 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sg-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sg",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sg #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sg-fr
* source languages: sg
* target languages: fr
* OPUS readme: sg-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.9, chr-F: 0.420
| [
"### opus-mt-sg-fr\n\n\n* source languages: sg\n* target languages: fr\n* OPUS readme: sg-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.9, chr-F: 0.420"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sg #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sg-fr\n\n\n* source languages: sg\n* target languages: fr\n* OPUS readme: sg-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.9, chr-F: 0.420"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sg #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sg-fr\n\n\n* source languages: sg\n* target languages: fr\n* OPUS readme: sg-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.9, chr-F: 0.420"
] |
translation | transformers |
### opus-mt-sg-sv
* source languages: sg
* target languages: sv
* OPUS readme: [sg-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sg-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sg-sv/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-sv/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-sv/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sg.sv | 25.3 | 0.428 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sg-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sg",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sg #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sg-sv
* source languages: sg
* target languages: sv
* OPUS readme: sg-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.3, chr-F: 0.428
| [
"### opus-mt-sg-sv\n\n\n* source languages: sg\n* target languages: sv\n* OPUS readme: sg-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.428"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sg #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sg-sv\n\n\n* source languages: sg\n* target languages: sv\n* OPUS readme: sg-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.428"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sg #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sg-sv\n\n\n* source languages: sg\n* target languages: sv\n* OPUS readme: sg-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.428"
] |
translation | transformers |
### hbs-epo
* source group: Serbo-Croatian
* target group: Esperanto
* OPUS readme: [hbs-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hbs-epo/README.md)
* model: transformer-align
* source language(s): bos_Latn hrv srp_Cyrl srp_Latn
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.hbs.epo | 18.7 | 0.383 |
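
For quick experiments the high-level `pipeline` API can also be used instead of the raw Marian classes; a minimal sketch, with an illustrative Serbo-Croatian input sentence, is shown below.

```python
from transformers import pipeline

# Serbo-Croatian -> Esperanto; the checkpoint name follows the short pair "sh-eo".
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-sh-eo")

print(translator("Dobar dan, kako ste?")[0]["translation_text"])
```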
### System Info:
- hf_name: hbs-epo
- source_languages: hbs
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hbs-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sh', 'eo']
- src_constituents: {'hrv', 'srp_Cyrl', 'bos_Latn', 'srp_Latn'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-epo/opus-2020-06-16.test.txt
- src_alpha3: hbs
- tgt_alpha3: epo
- short_pair: sh-eo
- chrF2_score: 0.38299999999999995
- bleu: 18.7
- brevity_penalty: 0.9990000000000001
- ref_len: 18457.0
- src_name: Serbo-Croatian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: sh
- tgt_alpha2: eo
- prefer_old: False
- long_pair: hbs-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["sh", "eo"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sh-eo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sh",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sh",
"eo"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sh #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### hbs-epo
* source group: Serbo-Croatian
* target group: Esperanto
* OPUS readme: hbs-epo
* model: transformer-align
* source language(s): bos\_Latn hrv srp\_Cyrl srp\_Latn
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 18.7, chr-F: 0.383
### System Info:
* hf\_name: hbs-epo
* source\_languages: hbs
* target\_languages: epo
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['sh', 'eo']
* src\_constituents: {'hrv', 'srp\_Cyrl', 'bos\_Latn', 'srp\_Latn'}
* tgt\_constituents: {'epo'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: hbs
* tgt\_alpha3: epo
* short\_pair: sh-eo
* chrF2\_score: 0.38299999999999995
* bleu: 18.7
* brevity\_penalty: 0.9990000000000001
* ref\_len: 18457.0
* src\_name: Serbo-Croatian
* tgt\_name: Esperanto
* train\_date: 2020-06-16
* src\_alpha2: sh
* tgt\_alpha2: eo
* prefer\_old: False
* long\_pair: hbs-epo
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### hbs-epo\n\n\n* source group: Serbo-Croatian\n* target group: Esperanto\n* OPUS readme: hbs-epo\n* model: transformer-align\n* source language(s): bos\\_Latn hrv srp\\_Cyrl srp\\_Latn\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.7, chr-F: 0.383",
"### System Info:\n\n\n* hf\\_name: hbs-epo\n* source\\_languages: hbs\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sh', 'eo']\n* src\\_constituents: {'hrv', 'srp\\_Cyrl', 'bos\\_Latn', 'srp\\_Latn'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hbs\n* tgt\\_alpha3: epo\n* short\\_pair: sh-eo\n* chrF2\\_score: 0.38299999999999995\n* bleu: 18.7\n* brevity\\_penalty: 0.9990000000000001\n* ref\\_len: 18457.0\n* src\\_name: Serbo-Croatian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: sh\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: hbs-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sh #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### hbs-epo\n\n\n* source group: Serbo-Croatian\n* target group: Esperanto\n* OPUS readme: hbs-epo\n* model: transformer-align\n* source language(s): bos\\_Latn hrv srp\\_Cyrl srp\\_Latn\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.7, chr-F: 0.383",
"### System Info:\n\n\n* hf\\_name: hbs-epo\n* source\\_languages: hbs\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sh', 'eo']\n* src\\_constituents: {'hrv', 'srp\\_Cyrl', 'bos\\_Latn', 'srp\\_Latn'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hbs\n* tgt\\_alpha3: epo\n* short\\_pair: sh-eo\n* chrF2\\_score: 0.38299999999999995\n* bleu: 18.7\n* brevity\\_penalty: 0.9990000000000001\n* ref\\_len: 18457.0\n* src\\_name: Serbo-Croatian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: sh\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: hbs-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
162,
457
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sh #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### hbs-epo\n\n\n* source group: Serbo-Croatian\n* target group: Esperanto\n* OPUS readme: hbs-epo\n* model: transformer-align\n* source language(s): bos\\_Latn hrv srp\\_Cyrl srp\\_Latn\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.7, chr-F: 0.383### System Info:\n\n\n* hf\\_name: hbs-epo\n* source\\_languages: hbs\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sh', 'eo']\n* src\\_constituents: {'hrv', 'srp\\_Cyrl', 'bos\\_Latn', 'srp\\_Latn'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hbs\n* tgt\\_alpha3: epo\n* short\\_pair: sh-eo\n* chrF2\\_score: 0.38299999999999995\n* bleu: 18.7\n* brevity\\_penalty: 0.9990000000000001\n* ref\\_len: 18457.0\n* src\\_name: Serbo-Croatian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: sh\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: hbs-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### hbs-ukr
* source group: Serbo-Croatian
* target group: Ukrainian
* OPUS readme: [hbs-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hbs-ukr/README.md)
* model: transformer-align
* source language(s): hrv srp_Cyrl srp_Latn
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.hbs.ukr | 49.6 | 0.665 |
### System Info:
- hf_name: hbs-ukr
- source_languages: hbs
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hbs-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sh', 'uk']
- src_constituents: {'hrv', 'srp_Cyrl', 'bos_Latn', 'srp_Latn'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-ukr/opus-2020-06-17.test.txt
- src_alpha3: hbs
- tgt_alpha3: ukr
- short_pair: sh-uk
- chrF2_score: 0.665
- bleu: 49.6
- brevity_penalty: 0.9840000000000001
- ref_len: 4959.0
- src_name: Serbo-Croatian
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: sh
- tgt_alpha2: uk
- prefer_old: False
- long_pair: hbs-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["sh", "uk"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sh-uk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sh",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sh",
"uk"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sh #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### hbs-ukr
* source group: Serbo-Croatian
* target group: Ukrainian
* OPUS readme: hbs-ukr
* model: transformer-align
* source language(s): hrv srp\_Cyrl srp\_Latn
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 49.6, chr-F: 0.665
### System Info:
* hf\_name: hbs-ukr
* source\_languages: hbs
* target\_languages: ukr
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['sh', 'uk']
* src\_constituents: {'hrv', 'srp\_Cyrl', 'bos\_Latn', 'srp\_Latn'}
* tgt\_constituents: {'ukr'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: hbs
* tgt\_alpha3: ukr
* short\_pair: sh-uk
* chrF2\_score: 0.665
* bleu: 49.6
* brevity\_penalty: 0.9840000000000001
* ref\_len: 4959.0
* src\_name: Serbo-Croatian
* tgt\_name: Ukrainian
* train\_date: 2020-06-17
* src\_alpha2: sh
* tgt\_alpha2: uk
* prefer\_old: False
* long\_pair: hbs-ukr
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### hbs-ukr\n\n\n* source group: Serbo-Croatian\n* target group: Ukrainian\n* OPUS readme: hbs-ukr\n* model: transformer-align\n* source language(s): hrv srp\\_Cyrl srp\\_Latn\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.6, chr-F: 0.665",
"### System Info:\n\n\n* hf\\_name: hbs-ukr\n* source\\_languages: hbs\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sh', 'uk']\n* src\\_constituents: {'hrv', 'srp\\_Cyrl', 'bos\\_Latn', 'srp\\_Latn'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hbs\n* tgt\\_alpha3: ukr\n* short\\_pair: sh-uk\n* chrF2\\_score: 0.665\n* bleu: 49.6\n* brevity\\_penalty: 0.9840000000000001\n* ref\\_len: 4959.0\n* src\\_name: Serbo-Croatian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: sh\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: hbs-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sh #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### hbs-ukr\n\n\n* source group: Serbo-Croatian\n* target group: Ukrainian\n* OPUS readme: hbs-ukr\n* model: transformer-align\n* source language(s): hrv srp\\_Cyrl srp\\_Latn\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.6, chr-F: 0.665",
"### System Info:\n\n\n* hf\\_name: hbs-ukr\n* source\\_languages: hbs\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sh', 'uk']\n* src\\_constituents: {'hrv', 'srp\\_Cyrl', 'bos\\_Latn', 'srp\\_Latn'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hbs\n* tgt\\_alpha3: ukr\n* short\\_pair: sh-uk\n* chrF2\\_score: 0.665\n* bleu: 49.6\n* brevity\\_penalty: 0.9840000000000001\n* ref\\_len: 4959.0\n* src\\_name: Serbo-Croatian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: sh\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: hbs-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
153,
439
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sh #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### hbs-ukr\n\n\n* source group: Serbo-Croatian\n* target group: Ukrainian\n* OPUS readme: hbs-ukr\n* model: transformer-align\n* source language(s): hrv srp\\_Cyrl srp\\_Latn\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.6, chr-F: 0.665### System Info:\n\n\n* hf\\_name: hbs-ukr\n* source\\_languages: hbs\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sh', 'uk']\n* src\\_constituents: {'hrv', 'srp\\_Cyrl', 'bos\\_Latn', 'srp\\_Latn'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hbs\n* tgt\\_alpha3: ukr\n* short\\_pair: sh-uk\n* chrF2\\_score: 0.665\n* bleu: 49.6\n* brevity\\_penalty: 0.9840000000000001\n* ref\\_len: 4959.0\n* src\\_name: Serbo-Croatian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: sh\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: hbs-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-sk-en
* source languages: sk
* target languages: en
* OPUS readme: [sk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sk-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sk-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sk.en | 42.2 | 0.612 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sk-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sk",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sk-en
* source languages: sk
* target languages: en
* OPUS readme: sk-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 42.2, chr-F: 0.612
| [
"### opus-mt-sk-en\n\n\n* source languages: sk\n* target languages: en\n* OPUS readme: sk-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.2, chr-F: 0.612"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sk-en\n\n\n* source languages: sk\n* target languages: en\n* OPUS readme: sk-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.2, chr-F: 0.612"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sk-en\n\n\n* source languages: sk\n* target languages: en\n* OPUS readme: sk-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.2, chr-F: 0.612"
] |
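Each card in this dump documents a MarianMT checkpoint published under the Helsinki-NLP namespace on the Hugging Face Hub. As a minimal, hedged sketch of how such a checkpoint can be used (the model id is taken from the opus-mt-sk-en card above; the Marian classes are the standard `transformers` API, and the Slovak example sentence is purely illustrative):

```python
# Minimal usage sketch for a single-pair OPUS-MT checkpoint (Slovak -> English).
# Assumes the `transformers` and `sentencepiece` packages are installed.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-sk-en"  # model id from the card above
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_texts = ["Dobrý deň, ako sa máte?"]  # illustrative Slovak input
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

The same pattern applies to the other single-pair cards in this dump; only the model id changes.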
translation | transformers |
### opus-mt-sk-es
* source languages: sk
* target languages: es
* OPUS readme: [sk-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sk-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sk-es/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-es/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-es/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sk.es | 29.6 | 0.505 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sk-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sk",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sk #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sk-es
* source languages: sk
* target languages: es
* OPUS readme: sk-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 29.6, chr-F: 0.505
| [
"### opus-mt-sk-es\n\n\n* source languages: sk\n* target languages: es\n* OPUS readme: sk-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.6, chr-F: 0.505"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sk #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sk-es\n\n\n* source languages: sk\n* target languages: es\n* OPUS readme: sk-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.6, chr-F: 0.505"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sk #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sk-es\n\n\n* source languages: sk\n* target languages: es\n* OPUS readme: sk-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.6, chr-F: 0.505"
] |
translation | transformers |
### opus-mt-sk-fi
* source languages: sk
* target languages: fi
* OPUS readme: [sk-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sk-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sk-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sk.fi | 27.6 | 0.544 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sk-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sk",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sk #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sk-fi
* source languages: sk
* target languages: fi
* OPUS readme: sk-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.6, chr-F: 0.544
| [
"### opus-mt-sk-fi\n\n\n* source languages: sk\n* target languages: fi\n* OPUS readme: sk-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.544"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sk #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sk-fi\n\n\n* source languages: sk\n* target languages: fi\n* OPUS readme: sk-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.544"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sk #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sk-fi\n\n\n* source languages: sk\n* target languages: fi\n* OPUS readme: sk-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.544"
] |
translation | transformers |
### opus-mt-sk-fr
* source languages: sk
* target languages: fr
* OPUS readme: [sk-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sk-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sk-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sk.fr | 29.4 | 0.508 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sk-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sk",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sk #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sk-fr
* source languages: sk
* target languages: fr
* OPUS readme: sk-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 29.4, chr-F: 0.508
| [
"### opus-mt-sk-fr\n\n\n* source languages: sk\n* target languages: fr\n* OPUS readme: sk-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.4, chr-F: 0.508"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sk #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sk-fr\n\n\n* source languages: sk\n* target languages: fr\n* OPUS readme: sk-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.4, chr-F: 0.508"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sk #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sk-fr\n\n\n* source languages: sk\n* target languages: fr\n* OPUS readme: sk-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.4, chr-F: 0.508"
] |
translation | transformers |
### opus-mt-sk-sv
* source languages: sk
* target languages: sv
* OPUS readme: [sk-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sk-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sk-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sk-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sk.sv | 33.1 | 0.544 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sk-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sk",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sk #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sk-sv
* source languages: sk
* target languages: sv
* OPUS readme: sk-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.1, chr-F: 0.544
| [
"### opus-mt-sk-sv\n\n\n* source languages: sk\n* target languages: sv\n* OPUS readme: sk-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.1, chr-F: 0.544"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sk #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sk-sv\n\n\n* source languages: sk\n* target languages: sv\n* OPUS readme: sk-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.1, chr-F: 0.544"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sk #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sk-sv\n\n\n* source languages: sk\n* target languages: sv\n* OPUS readme: sk-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.1, chr-F: 0.544"
] |
translation | transformers |
### opus-mt-sl-es
* source languages: sl
* target languages: es
* OPUS readme: [sl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sl-es/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-es/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-es/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sl.es | 26.3 | 0.483 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sl-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sl",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sl-es
* source languages: sl
* target languages: es
* OPUS readme: sl-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.3, chr-F: 0.483
| [
"### opus-mt-sl-es\n\n\n* source languages: sl\n* target languages: es\n* OPUS readme: sl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.3, chr-F: 0.483"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sl-es\n\n\n* source languages: sl\n* target languages: es\n* OPUS readme: sl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.3, chr-F: 0.483"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sl-es\n\n\n* source languages: sl\n* target languages: es\n* OPUS readme: sl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.3, chr-F: 0.483"
] |
translation | transformers |
### opus-mt-sl-fi
* source languages: sl
* target languages: fi
* OPUS readme: [sl-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sl-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sl-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sl.fi | 23.4 | 0.517 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sl-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sl",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sl #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sl-fi
* source languages: sl
* target languages: fi
* OPUS readme: sl-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.4, chr-F: 0.517
| [
"### opus-mt-sl-fi\n\n\n* source languages: sl\n* target languages: fi\n* OPUS readme: sl-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.4, chr-F: 0.517"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sl #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sl-fi\n\n\n* source languages: sl\n* target languages: fi\n* OPUS readme: sl-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.4, chr-F: 0.517"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sl #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sl-fi\n\n\n* source languages: sl\n* target languages: fi\n* OPUS readme: sl-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.4, chr-F: 0.517"
] |
translation | transformers |
### opus-mt-sl-fr
* source languages: sl
* target languages: fr
* OPUS readme: [sl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sl-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sl-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sl.fr | 25.0 | 0.475 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sl-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sl",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sl #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sl-fr
* source languages: sl
* target languages: fr
* OPUS readme: sl-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.0, chr-F: 0.475
| [
"### opus-mt-sl-fr\n\n\n* source languages: sl\n* target languages: fr\n* OPUS readme: sl-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.475"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sl #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sl-fr\n\n\n* source languages: sl\n* target languages: fr\n* OPUS readme: sl-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.475"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sl #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sl-fr\n\n\n* source languages: sl\n* target languages: fr\n* OPUS readme: sl-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.475"
] |
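The benchmark rows in these cards report BLEU and chr-F on the listed test sets (JW300, Tatoeba-test, newstest). A hedged sketch of computing the same two metrics for a list of system translations, assuming the `sacrebleu` package (the original OPUS-MT evaluation scripts may differ in tokenization and version details):

```python
# Hedged sketch: corpus-level BLEU and chrF with sacrebleu.
import sacrebleu

hypotheses = ["this is a system translation"]       # model outputs, one per segment
references = [["this is a reference translation"]]  # list of reference streams

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
# Note: depending on the sacrebleu version, chrF is reported on a 0-100 scale
# rather than the 0-1 scale shown in the cards above.
print(bleu.score, chrf.score)
```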
translation | transformers |
### slv-rus
* source group: Slovenian
* target group: Russian
* OPUS readme: [slv-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/slv-rus/README.md)
* model: transformer-align
* source language(s): slv
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.slv.rus | 37.3 | 0.504 |
### System Info:
- hf_name: slv-rus
- source_languages: slv
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/slv-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sl', 'ru']
- src_constituents: {'slv'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/slv-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/slv-rus/opus-2020-06-17.test.txt
- src_alpha3: slv
- tgt_alpha3: rus
- short_pair: sl-ru
- chrF2_score: 0.504
- bleu: 37.3
- brevity_penalty: 0.988
- ref_len: 2101.0
- src_name: Slovenian
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: sl
- tgt_alpha2: ru
- prefer_old: False
- long_pair: slv-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["sl", "ru"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sl-ru | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sl",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sl",
"ru"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sl #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### slv-rus
* source group: Slovenian
* target group: Russian
* OPUS readme: slv-rus
* model: transformer-align
* source language(s): slv
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.3, chr-F: 0.504
### System Info:
* hf\_name: slv-rus
* source\_languages: slv
* target\_languages: rus
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['sl', 'ru']
* src\_constituents: {'slv'}
* tgt\_constituents: {'rus'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: slv
* tgt\_alpha3: rus
* short\_pair: sl-ru
* chrF2\_score: 0.504
* bleu: 37.3
* brevity\_penalty: 0.988
* ref\_len: 2101.0
* src\_name: Slovenian
* tgt\_name: Russian
* train\_date: 2020-06-17
* src\_alpha2: sl
* tgt\_alpha2: ru
* prefer\_old: False
* long\_pair: slv-rus
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### slv-rus\n\n\n* source group: Slovenian\n* target group: Russian\n* OPUS readme: slv-rus\n* model: transformer-align\n* source language(s): slv\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.3, chr-F: 0.504",
"### System Info:\n\n\n* hf\\_name: slv-rus\n* source\\_languages: slv\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sl', 'ru']\n* src\\_constituents: {'slv'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: slv\n* tgt\\_alpha3: rus\n* short\\_pair: sl-ru\n* chrF2\\_score: 0.504\n* bleu: 37.3\n* brevity\\_penalty: 0.988\n* ref\\_len: 2101.0\n* src\\_name: Slovenian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: sl\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: slv-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sl #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### slv-rus\n\n\n* source group: Slovenian\n* target group: Russian\n* OPUS readme: slv-rus\n* model: transformer-align\n* source language(s): slv\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.3, chr-F: 0.504",
"### System Info:\n\n\n* hf\\_name: slv-rus\n* source\\_languages: slv\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sl', 'ru']\n* src\\_constituents: {'slv'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: slv\n* tgt\\_alpha3: rus\n* short\\_pair: sl-ru\n* chrF2\\_score: 0.504\n* bleu: 37.3\n* brevity\\_penalty: 0.988\n* ref\\_len: 2101.0\n* src\\_name: Slovenian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: sl\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: slv-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
134,
396
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sl #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### slv-rus\n\n\n* source group: Slovenian\n* target group: Russian\n* OPUS readme: slv-rus\n* model: transformer-align\n* source language(s): slv\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.3, chr-F: 0.504### System Info:\n\n\n* hf\\_name: slv-rus\n* source\\_languages: slv\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sl', 'ru']\n* src\\_constituents: {'slv'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: slv\n* tgt\\_alpha3: rus\n* short\\_pair: sl-ru\n* chrF2\\_score: 0.504\n* bleu: 37.3\n* brevity\\_penalty: 0.988\n* ref\\_len: 2101.0\n* src\\_name: Slovenian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: sl\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: slv-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
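Besides the converted Hub checkpoints, each card links the original Marian weights as a zip archive under object.pouta.csc.fi. A hedged sketch of fetching and unpacking one of those archives (the URL is copied from the slv-rus card above; the use of `requests` and the output directory name are assumptions, not part of the cards):

```python
# Hedged sketch: download and extract an original OPUS-MT / Tatoeba-MT weights zip.
import io
import zipfile

import requests

url = "https://object.pouta.csc.fi/Tatoeba-MT-models/slv-rus/opus-2020-06-17.zip"
response = requests.get(url, timeout=60)
response.raise_for_status()

with zipfile.ZipFile(io.BytesIO(response.content)) as archive:
    archive.extractall("slv-rus-opus-2020-06-17")  # assumed output directory
    print(archive.namelist())  # Marian model, vocab, and SentencePiece files
```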
translation | transformers |
### opus-mt-sl-sv
* source languages: sl
* target languages: sv
* OPUS readme: [sl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sl-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sl-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sl-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sl.sv | 27.8 | 0.509 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sl-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sl",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sl #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sl-sv
* source languages: sl
* target languages: sv
* OPUS readme: sl-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.8, chr-F: 0.509
| [
"### opus-mt-sl-sv\n\n\n* source languages: sl\n* target languages: sv\n* OPUS readme: sl-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.509"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sl #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sl-sv\n\n\n* source languages: sl\n* target languages: sv\n* OPUS readme: sl-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.509"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sl #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sl-sv\n\n\n* source languages: sl\n* target languages: sv\n* OPUS readme: sl-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.509"
] |
translation | transformers |
### slv-ukr
* source group: Slovenian
* target group: Ukrainian
* OPUS readme: [slv-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/slv-ukr/README.md)
* model: transformer-align
* source language(s): slv
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/slv-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.slv.ukr | 10.6 | 0.236 |
### System Info:
- hf_name: slv-ukr
- source_languages: slv
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/slv-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sl', 'uk']
- src_constituents: {'slv'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/slv-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/slv-ukr/opus-2020-06-17.test.txt
- src_alpha3: slv
- tgt_alpha3: ukr
- short_pair: sl-uk
- chrF2_score: 0.23600000000000002
- bleu: 10.6
- brevity_penalty: 1.0
- ref_len: 3906.0
- src_name: Slovenian
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: sl
- tgt_alpha2: uk
- prefer_old: False
- long_pair: slv-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["sl", "uk"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sl-uk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sl",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sl",
"uk"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sl #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### slv-ukr
* source group: Slovenian
* target group: Ukrainian
* OPUS readme: slv-ukr
* model: transformer-align
* source language(s): slv
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 10.6, chr-F: 0.236
### System Info:
* hf\_name: slv-ukr
* source\_languages: slv
* target\_languages: ukr
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['sl', 'uk']
* src\_constituents: {'slv'}
* tgt\_constituents: {'ukr'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: slv
* tgt\_alpha3: ukr
* short\_pair: sl-uk
* chrF2\_score: 0.23600000000000002
* bleu: 10.6
* brevity\_penalty: 1.0
* ref\_len: 3906.0
* src\_name: Slovenian
* tgt\_name: Ukrainian
* train\_date: 2020-06-17
* src\_alpha2: sl
* tgt\_alpha2: uk
* prefer\_old: False
* long\_pair: slv-ukr
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### slv-ukr\n\n\n* source group: Slovenian\n* target group: Ukrainian\n* OPUS readme: slv-ukr\n* model: transformer-align\n* source language(s): slv\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 10.6, chr-F: 0.236",
"### System Info:\n\n\n* hf\\_name: slv-ukr\n* source\\_languages: slv\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sl', 'uk']\n* src\\_constituents: {'slv'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: slv\n* tgt\\_alpha3: ukr\n* short\\_pair: sl-uk\n* chrF2\\_score: 0.23600000000000002\n* bleu: 10.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 3906.0\n* src\\_name: Slovenian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: sl\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: slv-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sl #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### slv-ukr\n\n\n* source group: Slovenian\n* target group: Ukrainian\n* OPUS readme: slv-ukr\n* model: transformer-align\n* source language(s): slv\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 10.6, chr-F: 0.236",
"### System Info:\n\n\n* hf\\_name: slv-ukr\n* source\\_languages: slv\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sl', 'uk']\n* src\\_constituents: {'slv'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: slv\n* tgt\\_alpha3: ukr\n* short\\_pair: sl-uk\n* chrF2\\_score: 0.23600000000000002\n* bleu: 10.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 3906.0\n* src\\_name: Slovenian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: sl\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: slv-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
136,
407
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sl #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### slv-ukr\n\n\n* source group: Slovenian\n* target group: Ukrainian\n* OPUS readme: slv-ukr\n* model: transformer-align\n* source language(s): slv\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 10.6, chr-F: 0.236### System Info:\n\n\n* hf\\_name: slv-ukr\n* source\\_languages: slv\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sl', 'uk']\n* src\\_constituents: {'slv'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: slv\n* tgt\\_alpha3: ukr\n* short\\_pair: sl-uk\n* chrF2\\_score: 0.23600000000000002\n* bleu: 10.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 3906.0\n* src\\_name: Slovenian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: sl\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: slv-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### sla-eng
* source group: Slavic languages
* target group: English
* OPUS readme: [sla-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sla-eng/README.md)
* model: transformer
* source language(s): bel bel_Latn bos_Latn bul bul_Latn ces csb_Latn dsb hrv hsb mkd orv_Cyrl pol rue rus slv srp_Cyrl srp_Latn ukr
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-ceseng.ces.eng | 26.7 | 0.542 |
| newstest2009-ceseng.ces.eng | 25.2 | 0.534 |
| newstest2010-ceseng.ces.eng | 25.9 | 0.545 |
| newstest2011-ceseng.ces.eng | 26.8 | 0.544 |
| newstest2012-ceseng.ces.eng | 25.6 | 0.536 |
| newstest2012-ruseng.rus.eng | 32.5 | 0.588 |
| newstest2013-ceseng.ces.eng | 28.8 | 0.556 |
| newstest2013-ruseng.rus.eng | 26.4 | 0.532 |
| newstest2014-csen-ceseng.ces.eng | 31.4 | 0.591 |
| newstest2014-ruen-ruseng.rus.eng | 29.6 | 0.576 |
| newstest2015-encs-ceseng.ces.eng | 28.2 | 0.545 |
| newstest2015-enru-ruseng.rus.eng | 28.1 | 0.551 |
| newstest2016-encs-ceseng.ces.eng | 30.0 | 0.567 |
| newstest2016-enru-ruseng.rus.eng | 27.4 | 0.548 |
| newstest2017-encs-ceseng.ces.eng | 26.5 | 0.537 |
| newstest2017-enru-ruseng.rus.eng | 31.0 | 0.574 |
| newstest2018-encs-ceseng.ces.eng | 27.9 | 0.548 |
| newstest2018-enru-ruseng.rus.eng | 26.8 | 0.545 |
| newstest2019-ruen-ruseng.rus.eng | 29.1 | 0.562 |
| Tatoeba-test.bel-eng.bel.eng | 42.5 | 0.609 |
| Tatoeba-test.bul-eng.bul.eng | 55.4 | 0.697 |
| Tatoeba-test.ces-eng.ces.eng | 53.1 | 0.688 |
| Tatoeba-test.csb-eng.csb.eng | 23.1 | 0.446 |
| Tatoeba-test.dsb-eng.dsb.eng | 31.1 | 0.467 |
| Tatoeba-test.hbs-eng.hbs.eng | 56.1 | 0.702 |
| Tatoeba-test.hsb-eng.hsb.eng | 46.2 | 0.597 |
| Tatoeba-test.mkd-eng.mkd.eng | 54.5 | 0.680 |
| Tatoeba-test.multi.eng | 53.2 | 0.683 |
| Tatoeba-test.orv-eng.orv.eng | 12.1 | 0.292 |
| Tatoeba-test.pol-eng.pol.eng | 51.1 | 0.671 |
| Tatoeba-test.rue-eng.rue.eng | 19.6 | 0.389 |
| Tatoeba-test.rus-eng.rus.eng | 54.1 | 0.686 |
| Tatoeba-test.slv-eng.slv.eng | 43.4 | 0.610 |
| Tatoeba-test.ukr-eng.ukr.eng | 53.8 | 0.685 |
### System Info:
- hf_name: sla-eng
- source_languages: sla
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sla-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla', 'en']
- src_constituents: {'bel', 'hrv', 'orv_Cyrl', 'mkd', 'bel_Latn', 'srp_Latn', 'bul_Latn', 'ces', 'bos_Latn', 'csb_Latn', 'dsb', 'hsb', 'rus', 'srp_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.test.txt
- src_alpha3: sla
- tgt_alpha3: eng
- short_pair: sla-en
- chrF2_score: 0.6829999999999999
- bleu: 53.2
- brevity_penalty: 0.9740000000000001
- ref_len: 70897.0
- src_name: Slavic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: sla
- tgt_alpha2: en
- prefer_old: False
- long_pair: sla-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["be", "hr", "mk", "cs", "ru", "pl", "bg", "uk", "sl", "sla", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sla-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"be",
"hr",
"mk",
"cs",
"ru",
"pl",
"bg",
"uk",
"sl",
"sla",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"be",
"hr",
"mk",
"cs",
"ru",
"pl",
"bg",
"uk",
"sl",
"sla",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #be #hr #mk #cs #ru #pl #bg #uk #sl #sla #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### sla-eng
* source group: Slavic languages
* target group: English
* OPUS readme: sla-eng
* model: transformer
* source language(s): bel bel\_Latn bos\_Latn bul bul\_Latn ces csb\_Latn dsb hrv hsb mkd orv\_Cyrl pol rue rus slv srp\_Cyrl srp\_Latn ukr
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.7, chr-F: 0.542
testset: URL, BLEU: 25.2, chr-F: 0.534
testset: URL, BLEU: 25.9, chr-F: 0.545
testset: URL, BLEU: 26.8, chr-F: 0.544
testset: URL, BLEU: 25.6, chr-F: 0.536
testset: URL, BLEU: 32.5, chr-F: 0.588
testset: URL, BLEU: 28.8, chr-F: 0.556
testset: URL, BLEU: 26.4, chr-F: 0.532
testset: URL, BLEU: 31.4, chr-F: 0.591
testset: URL, BLEU: 29.6, chr-F: 0.576
testset: URL, BLEU: 28.2, chr-F: 0.545
testset: URL, BLEU: 28.1, chr-F: 0.551
testset: URL, BLEU: 30.0, chr-F: 0.567
testset: URL, BLEU: 27.4, chr-F: 0.548
testset: URL, BLEU: 26.5, chr-F: 0.537
testset: URL, BLEU: 31.0, chr-F: 0.574
testset: URL, BLEU: 27.9, chr-F: 0.548
testset: URL, BLEU: 26.8, chr-F: 0.545
testset: URL, BLEU: 29.1, chr-F: 0.562
testset: URL, BLEU: 42.5, chr-F: 0.609
testset: URL, BLEU: 55.4, chr-F: 0.697
testset: URL, BLEU: 53.1, chr-F: 0.688
testset: URL, BLEU: 23.1, chr-F: 0.446
testset: URL, BLEU: 31.1, chr-F: 0.467
testset: URL, BLEU: 56.1, chr-F: 0.702
testset: URL, BLEU: 46.2, chr-F: 0.597
testset: URL, BLEU: 54.5, chr-F: 0.680
testset: URL, BLEU: 53.2, chr-F: 0.683
testset: URL, BLEU: 12.1, chr-F: 0.292
testset: URL, BLEU: 51.1, chr-F: 0.671
testset: URL, BLEU: 19.6, chr-F: 0.389
testset: URL, BLEU: 54.1, chr-F: 0.686
testset: URL, BLEU: 43.4, chr-F: 0.610
testset: URL, BLEU: 53.8, chr-F: 0.685
### System Info:
* hf\_name: sla-eng
* source\_languages: sla
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla', 'en']
* src\_constituents: {'bel', 'hrv', 'orv\_Cyrl', 'mkd', 'bel\_Latn', 'srp\_Latn', 'bul\_Latn', 'ces', 'bos\_Latn', 'csb\_Latn', 'dsb', 'hsb', 'rus', 'srp\_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: sla
* tgt\_alpha3: eng
* short\_pair: sla-en
* chrF2\_score: 0.6829999999999999
* bleu: 53.2
* brevity\_penalty: 0.9740000000000001
* ref\_len: 70897.0
* src\_name: Slavic languages
* tgt\_name: English
* train\_date: 2020-08-01
* src\_alpha2: sla
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: sla-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### sla-eng\n\n\n* source group: Slavic languages\n* target group: English\n* OPUS readme: sla-eng\n* model: transformer\n* source language(s): bel bel\\_Latn bos\\_Latn bul bul\\_Latn ces csb\\_Latn dsb hrv hsb mkd orv\\_Cyrl pol rue rus slv srp\\_Cyrl srp\\_Latn ukr\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.542\ntestset: URL, BLEU: 25.2, chr-F: 0.534\ntestset: URL, BLEU: 25.9, chr-F: 0.545\ntestset: URL, BLEU: 26.8, chr-F: 0.544\ntestset: URL, BLEU: 25.6, chr-F: 0.536\ntestset: URL, BLEU: 32.5, chr-F: 0.588\ntestset: URL, BLEU: 28.8, chr-F: 0.556\ntestset: URL, BLEU: 26.4, chr-F: 0.532\ntestset: URL, BLEU: 31.4, chr-F: 0.591\ntestset: URL, BLEU: 29.6, chr-F: 0.576\ntestset: URL, BLEU: 28.2, chr-F: 0.545\ntestset: URL, BLEU: 28.1, chr-F: 0.551\ntestset: URL, BLEU: 30.0, chr-F: 0.567\ntestset: URL, BLEU: 27.4, chr-F: 0.548\ntestset: URL, BLEU: 26.5, chr-F: 0.537\ntestset: URL, BLEU: 31.0, chr-F: 0.574\ntestset: URL, BLEU: 27.9, chr-F: 0.548\ntestset: URL, BLEU: 26.8, chr-F: 0.545\ntestset: URL, BLEU: 29.1, chr-F: 0.562\ntestset: URL, BLEU: 42.5, chr-F: 0.609\ntestset: URL, BLEU: 55.4, chr-F: 0.697\ntestset: URL, BLEU: 53.1, chr-F: 0.688\ntestset: URL, BLEU: 23.1, chr-F: 0.446\ntestset: URL, BLEU: 31.1, chr-F: 0.467\ntestset: URL, BLEU: 56.1, chr-F: 0.702\ntestset: URL, BLEU: 46.2, chr-F: 0.597\ntestset: URL, BLEU: 54.5, chr-F: 0.680\ntestset: URL, BLEU: 53.2, chr-F: 0.683\ntestset: URL, BLEU: 12.1, chr-F: 0.292\ntestset: URL, BLEU: 51.1, chr-F: 0.671\ntestset: URL, BLEU: 19.6, chr-F: 0.389\ntestset: URL, BLEU: 54.1, chr-F: 0.686\ntestset: URL, BLEU: 43.4, chr-F: 0.610\ntestset: URL, BLEU: 53.8, chr-F: 0.685",
"### System Info:\n\n\n* hf\\_name: sla-eng\n* source\\_languages: sla\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla', 'en']\n* src\\_constituents: {'bel', 'hrv', 'orv\\_Cyrl', 'mkd', 'bel\\_Latn', 'srp\\_Latn', 'bul\\_Latn', 'ces', 'bos\\_Latn', 'csb\\_Latn', 'dsb', 'hsb', 'rus', 'srp\\_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: sla\n* tgt\\_alpha3: eng\n* short\\_pair: sla-en\n* chrF2\\_score: 0.6829999999999999\n* bleu: 53.2\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 70897.0\n* src\\_name: Slavic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: sla\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: sla-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #be #hr #mk #cs #ru #pl #bg #uk #sl #sla #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### sla-eng\n\n\n* source group: Slavic languages\n* target group: English\n* OPUS readme: sla-eng\n* model: transformer\n* source language(s): bel bel\\_Latn bos\\_Latn bul bul\\_Latn ces csb\\_Latn dsb hrv hsb mkd orv\\_Cyrl pol rue rus slv srp\\_Cyrl srp\\_Latn ukr\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.542\ntestset: URL, BLEU: 25.2, chr-F: 0.534\ntestset: URL, BLEU: 25.9, chr-F: 0.545\ntestset: URL, BLEU: 26.8, chr-F: 0.544\ntestset: URL, BLEU: 25.6, chr-F: 0.536\ntestset: URL, BLEU: 32.5, chr-F: 0.588\ntestset: URL, BLEU: 28.8, chr-F: 0.556\ntestset: URL, BLEU: 26.4, chr-F: 0.532\ntestset: URL, BLEU: 31.4, chr-F: 0.591\ntestset: URL, BLEU: 29.6, chr-F: 0.576\ntestset: URL, BLEU: 28.2, chr-F: 0.545\ntestset: URL, BLEU: 28.1, chr-F: 0.551\ntestset: URL, BLEU: 30.0, chr-F: 0.567\ntestset: URL, BLEU: 27.4, chr-F: 0.548\ntestset: URL, BLEU: 26.5, chr-F: 0.537\ntestset: URL, BLEU: 31.0, chr-F: 0.574\ntestset: URL, BLEU: 27.9, chr-F: 0.548\ntestset: URL, BLEU: 26.8, chr-F: 0.545\ntestset: URL, BLEU: 29.1, chr-F: 0.562\ntestset: URL, BLEU: 42.5, chr-F: 0.609\ntestset: URL, BLEU: 55.4, chr-F: 0.697\ntestset: URL, BLEU: 53.1, chr-F: 0.688\ntestset: URL, BLEU: 23.1, chr-F: 0.446\ntestset: URL, BLEU: 31.1, chr-F: 0.467\ntestset: URL, BLEU: 56.1, chr-F: 0.702\ntestset: URL, BLEU: 46.2, chr-F: 0.597\ntestset: URL, BLEU: 54.5, chr-F: 0.680\ntestset: URL, BLEU: 53.2, chr-F: 0.683\ntestset: URL, BLEU: 12.1, chr-F: 0.292\ntestset: URL, BLEU: 51.1, chr-F: 0.671\ntestset: URL, BLEU: 19.6, chr-F: 0.389\ntestset: URL, BLEU: 54.1, chr-F: 0.686\ntestset: URL, BLEU: 43.4, chr-F: 0.610\ntestset: URL, BLEU: 53.8, chr-F: 0.685",
"### System Info:\n\n\n* hf\\_name: sla-eng\n* source\\_languages: sla\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla', 'en']\n* src\\_constituents: {'bel', 'hrv', 'orv\\_Cyrl', 'mkd', 'bel\\_Latn', 'srp\\_Latn', 'bul\\_Latn', 'ces', 'bos\\_Latn', 'csb\\_Latn', 'dsb', 'hsb', 'rus', 'srp\\_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: sla\n* tgt\\_alpha3: eng\n* short\\_pair: sla-en\n* chrF2\\_score: 0.6829999999999999\n* bleu: 53.2\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 70897.0\n* src\\_name: Slavic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: sla\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: sla-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
71,
951,
575
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #be #hr #mk #cs #ru #pl #bg #uk #sl #sla #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### sla-eng\n\n\n* source group: Slavic languages\n* target group: English\n* OPUS readme: sla-eng\n* model: transformer\n* source language(s): bel bel\\_Latn bos\\_Latn bul bul\\_Latn ces csb\\_Latn dsb hrv hsb mkd orv\\_Cyrl pol rue rus slv srp\\_Cyrl srp\\_Latn ukr\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.542\ntestset: URL, BLEU: 25.2, chr-F: 0.534\ntestset: URL, BLEU: 25.9, chr-F: 0.545\ntestset: URL, BLEU: 26.8, chr-F: 0.544\ntestset: URL, BLEU: 25.6, chr-F: 0.536\ntestset: URL, BLEU: 32.5, chr-F: 0.588\ntestset: URL, BLEU: 28.8, chr-F: 0.556\ntestset: URL, BLEU: 26.4, chr-F: 0.532\ntestset: URL, BLEU: 31.4, chr-F: 0.591\ntestset: URL, BLEU: 29.6, chr-F: 0.576\ntestset: URL, BLEU: 28.2, chr-F: 0.545\ntestset: URL, BLEU: 28.1, chr-F: 0.551\ntestset: URL, BLEU: 30.0, chr-F: 0.567\ntestset: URL, BLEU: 27.4, chr-F: 0.548\ntestset: URL, BLEU: 26.5, chr-F: 0.537\ntestset: URL, BLEU: 31.0, chr-F: 0.574\ntestset: URL, BLEU: 27.9, chr-F: 0.548\ntestset: URL, BLEU: 26.8, chr-F: 0.545\ntestset: URL, BLEU: 29.1, chr-F: 0.562\ntestset: URL, BLEU: 42.5, chr-F: 0.609\ntestset: URL, BLEU: 55.4, chr-F: 0.697\ntestset: URL, BLEU: 53.1, chr-F: 0.688\ntestset: URL, BLEU: 23.1, chr-F: 0.446\ntestset: URL, BLEU: 31.1, chr-F: 0.467\ntestset: URL, BLEU: 56.1, chr-F: 0.702\ntestset: URL, BLEU: 46.2, chr-F: 0.597\ntestset: URL, BLEU: 54.5, chr-F: 0.680\ntestset: URL, BLEU: 53.2, chr-F: 0.683\ntestset: URL, BLEU: 12.1, chr-F: 0.292\ntestset: URL, BLEU: 51.1, chr-F: 0.671\ntestset: URL, BLEU: 19.6, chr-F: 0.389\ntestset: URL, BLEU: 54.1, chr-F: 0.686\ntestset: URL, BLEU: 43.4, chr-F: 0.610\ntestset: URL, BLEU: 53.8, chr-F: 0.685### System Info:\n\n\n* hf\\_name: sla-eng\n* source\\_languages: sla\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla', 'en']\n* src\\_constituents: {'bel', 'hrv', 'orv\\_Cyrl', 'mkd', 'bel\\_Latn', 'srp\\_Latn', 'bul\\_Latn', 'ces', 'bos\\_Latn', 'csb\\_Latn', 'dsb', 'hsb', 'rus', 'srp\\_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: sla\n* tgt\\_alpha3: eng\n* short\\_pair: sla-en\n* chrF2\\_score: 0.6829999999999999\n* bleu: 53.2\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 70897.0\n* src\\_name: Slavic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: sla\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: sla-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### sla-sla
* source group: Slavic languages
* target group: Slavic languages
* OPUS readme: [sla-sla](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sla-sla/README.md)
* model: transformer
* source language(s): bel bel_Latn bos_Latn bul bul_Latn ces dsb hrv hsb mkd orv_Cyrl pol rus slv srp_Cyrl srp_Latn ukr
* target language(s): bel bel_Latn bos_Latn bul bul_Latn ces dsb hrv hsb mkd orv_Cyrl pol rus slv srp_Cyrl srp_Latn ukr
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.eval.txt)
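The `>>id<<` requirement above is the one non-obvious step when driving this multilingual checkpoint. The sketch below shows how it might be used through the Hugging Face `transformers` Marian classes; the Polish example sentence and the `>>rus<<` token are illustrative assumptions, and the tokenizer's `supported_language_codes` attribute should list the ids the model actually accepts.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-sla-sla"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The target language is chosen with a sentence-initial >>id<< token.
# print(tokenizer.supported_language_codes)  # lists the accepted target ids

src = ">>rus<< Dzień dobry, jak się masz?"  # Polish source, Russian requested as target (illustrative)
batch = tokenizer([src], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```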
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012-cesrus.ces.rus | 15.9 | 0.437 |
| newstest2012-rusces.rus.ces | 13.6 | 0.403 |
| newstest2013-cesrus.ces.rus | 19.8 | 0.473 |
| newstest2013-rusces.rus.ces | 17.9 | 0.449 |
| Tatoeba-test.bel-bul.bel.bul | 100.0 | 1.000 |
| Tatoeba-test.bel-ces.bel.ces | 33.5 | 0.630 |
| Tatoeba-test.bel-hbs.bel.hbs | 45.4 | 0.644 |
| Tatoeba-test.bel-mkd.bel.mkd | 19.3 | 0.531 |
| Tatoeba-test.bel-pol.bel.pol | 46.9 | 0.681 |
| Tatoeba-test.bel-rus.bel.rus | 58.5 | 0.767 |
| Tatoeba-test.bel-ukr.bel.ukr | 55.1 | 0.743 |
| Tatoeba-test.bul-bel.bul.bel | 10.7 | 0.423 |
| Tatoeba-test.bul-ces.bul.ces | 36.9 | 0.585 |
| Tatoeba-test.bul-hbs.bul.hbs | 53.7 | 0.807 |
| Tatoeba-test.bul-mkd.bul.mkd | 31.9 | 0.715 |
| Tatoeba-test.bul-pol.bul.pol | 38.6 | 0.607 |
| Tatoeba-test.bul-rus.bul.rus | 44.8 | 0.655 |
| Tatoeba-test.bul-ukr.bul.ukr | 49.9 | 0.691 |
| Tatoeba-test.ces-bel.ces.bel | 30.9 | 0.585 |
| Tatoeba-test.ces-bul.ces.bul | 75.8 | 0.859 |
| Tatoeba-test.ces-hbs.ces.hbs | 50.0 | 0.661 |
| Tatoeba-test.ces-hsb.ces.hsb | 7.9 | 0.246 |
| Tatoeba-test.ces-mkd.ces.mkd | 24.6 | 0.569 |
| Tatoeba-test.ces-pol.ces.pol | 44.3 | 0.652 |
| Tatoeba-test.ces-rus.ces.rus | 50.8 | 0.690 |
| Tatoeba-test.ces-slv.ces.slv | 4.9 | 0.240 |
| Tatoeba-test.ces-ukr.ces.ukr | 52.9 | 0.687 |
| Tatoeba-test.dsb-pol.dsb.pol | 16.3 | 0.367 |
| Tatoeba-test.dsb-rus.dsb.rus | 12.7 | 0.245 |
| Tatoeba-test.hbs-bel.hbs.bel | 32.9 | 0.531 |
| Tatoeba-test.hbs-bul.hbs.bul | 100.0 | 1.000 |
| Tatoeba-test.hbs-ces.hbs.ces | 40.3 | 0.626 |
| Tatoeba-test.hbs-mkd.hbs.mkd | 19.3 | 0.535 |
| Tatoeba-test.hbs-pol.hbs.pol | 45.0 | 0.650 |
| Tatoeba-test.hbs-rus.hbs.rus | 53.5 | 0.709 |
| Tatoeba-test.hbs-ukr.hbs.ukr | 50.7 | 0.684 |
| Tatoeba-test.hsb-ces.hsb.ces | 17.9 | 0.366 |
| Tatoeba-test.mkd-bel.mkd.bel | 23.6 | 0.548 |
| Tatoeba-test.mkd-bul.mkd.bul | 54.2 | 0.833 |
| Tatoeba-test.mkd-ces.mkd.ces | 12.1 | 0.371 |
| Tatoeba-test.mkd-hbs.mkd.hbs | 19.3 | 0.577 |
| Tatoeba-test.mkd-pol.mkd.pol | 53.7 | 0.833 |
| Tatoeba-test.mkd-rus.mkd.rus | 34.2 | 0.745 |
| Tatoeba-test.mkd-ukr.mkd.ukr | 42.7 | 0.708 |
| Tatoeba-test.multi.multi | 48.5 | 0.672 |
| Tatoeba-test.orv-pol.orv.pol | 10.1 | 0.355 |
| Tatoeba-test.orv-rus.orv.rus | 10.6 | 0.275 |
| Tatoeba-test.orv-ukr.orv.ukr | 7.5 | 0.230 |
| Tatoeba-test.pol-bel.pol.bel | 29.8 | 0.533 |
| Tatoeba-test.pol-bul.pol.bul | 36.8 | 0.578 |
| Tatoeba-test.pol-ces.pol.ces | 43.6 | 0.626 |
| Tatoeba-test.pol-dsb.pol.dsb | 0.9 | 0.097 |
| Tatoeba-test.pol-hbs.pol.hbs | 42.4 | 0.644 |
| Tatoeba-test.pol-mkd.pol.mkd | 19.3 | 0.535 |
| Tatoeba-test.pol-orv.pol.orv | 0.7 | 0.109 |
| Tatoeba-test.pol-rus.pol.rus | 49.6 | 0.680 |
| Tatoeba-test.pol-slv.pol.slv | 7.3 | 0.262 |
| Tatoeba-test.pol-ukr.pol.ukr | 46.8 | 0.664 |
| Tatoeba-test.rus-bel.rus.bel | 34.4 | 0.577 |
| Tatoeba-test.rus-bul.rus.bul | 45.5 | 0.657 |
| Tatoeba-test.rus-ces.rus.ces | 48.0 | 0.659 |
| Tatoeba-test.rus-dsb.rus.dsb | 10.7 | 0.029 |
| Tatoeba-test.rus-hbs.rus.hbs | 44.6 | 0.655 |
| Tatoeba-test.rus-mkd.rus.mkd | 34.9 | 0.617 |
| Tatoeba-test.rus-orv.rus.orv | 0.1 | 0.073 |
| Tatoeba-test.rus-pol.rus.pol | 45.2 | 0.659 |
| Tatoeba-test.rus-slv.rus.slv | 30.4 | 0.476 |
| Tatoeba-test.rus-ukr.rus.ukr | 57.6 | 0.751 |
| Tatoeba-test.slv-ces.slv.ces | 42.5 | 0.604 |
| Tatoeba-test.slv-pol.slv.pol | 39.6 | 0.601 |
| Tatoeba-test.slv-rus.slv.rus | 47.2 | 0.638 |
| Tatoeba-test.slv-ukr.slv.ukr | 36.4 | 0.549 |
| Tatoeba-test.ukr-bel.ukr.bel | 36.9 | 0.597 |
| Tatoeba-test.ukr-bul.ukr.bul | 56.4 | 0.733 |
| Tatoeba-test.ukr-ces.ukr.ces | 52.1 | 0.686 |
| Tatoeba-test.ukr-hbs.ukr.hbs | 47.1 | 0.670 |
| Tatoeba-test.ukr-mkd.ukr.mkd | 20.8 | 0.548 |
| Tatoeba-test.ukr-orv.ukr.orv | 0.2 | 0.058 |
| Tatoeba-test.ukr-pol.ukr.pol | 50.1 | 0.695 |
| Tatoeba-test.ukr-rus.ukr.rus | 63.9 | 0.790 |
| Tatoeba-test.ukr-slv.ukr.slv | 14.5 | 0.288 |
### System Info:
- hf_name: sla-sla
- source_languages: sla
- target_languages: sla
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sla-sla/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla']
- src_constituents: {'bel', 'hrv', 'orv_Cyrl', 'mkd', 'bel_Latn', 'srp_Latn', 'bul_Latn', 'ces', 'bos_Latn', 'csb_Latn', 'dsb', 'hsb', 'rus', 'srp_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}
- tgt_constituents: {'bel', 'hrv', 'orv_Cyrl', 'mkd', 'bel_Latn', 'srp_Latn', 'bul_Latn', 'ces', 'bos_Latn', 'csb_Latn', 'dsb', 'hsb', 'rus', 'srp_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.test.txt
- src_alpha3: sla
- tgt_alpha3: sla
- short_pair: sla-sla
- chrF2_score: 0.672
- bleu: 48.5
- brevity_penalty: 1.0
- ref_len: 59320.0
- src_name: Slavic languages
- tgt_name: Slavic languages
- train_date: 2020-07-27
- src_alpha2: sla
- tgt_alpha2: sla
- prefer_old: False
- long_pair: sla-sla
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["be", "hr", "mk", "cs", "ru", "pl", "bg", "uk", "sl", "sla"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sla-sla | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"be",
"hr",
"mk",
"cs",
"ru",
"pl",
"bg",
"uk",
"sl",
"sla",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"be",
"hr",
"mk",
"cs",
"ru",
"pl",
"bg",
"uk",
"sl",
"sla"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #be #hr #mk #cs #ru #pl #bg #uk #sl #sla #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### sla-sla
* source group: Slavic languages
* target group: Slavic languages
* OPUS readme: sla-sla
* model: transformer
* source language(s): bel bel\_Latn bos\_Latn bul bul\_Latn ces dsb hrv hsb mkd orv\_Cyrl pol rus slv srp\_Cyrl srp\_Latn ukr
* target language(s): bel bel\_Latn bos\_Latn bul bul\_Latn ces dsb hrv hsb mkd orv\_Cyrl pol rus slv srp\_Cyrl srp\_Latn ukr
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 15.9, chr-F: 0.437
testset: URL, BLEU: 13.6, chr-F: 0.403
testset: URL, BLEU: 19.8, chr-F: 0.473
testset: URL, BLEU: 17.9, chr-F: 0.449
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 33.5, chr-F: 0.630
testset: URL, BLEU: 45.4, chr-F: 0.644
testset: URL, BLEU: 19.3, chr-F: 0.531
testset: URL, BLEU: 46.9, chr-F: 0.681
testset: URL, BLEU: 58.5, chr-F: 0.767
testset: URL, BLEU: 55.1, chr-F: 0.743
testset: URL, BLEU: 10.7, chr-F: 0.423
testset: URL, BLEU: 36.9, chr-F: 0.585
testset: URL, BLEU: 53.7, chr-F: 0.807
testset: URL, BLEU: 31.9, chr-F: 0.715
testset: URL, BLEU: 38.6, chr-F: 0.607
testset: URL, BLEU: 44.8, chr-F: 0.655
testset: URL, BLEU: 49.9, chr-F: 0.691
testset: URL, BLEU: 30.9, chr-F: 0.585
testset: URL, BLEU: 75.8, chr-F: 0.859
testset: URL, BLEU: 50.0, chr-F: 0.661
testset: URL, BLEU: 7.9, chr-F: 0.246
testset: URL, BLEU: 24.6, chr-F: 0.569
testset: URL, BLEU: 44.3, chr-F: 0.652
testset: URL, BLEU: 50.8, chr-F: 0.690
testset: URL, BLEU: 4.9, chr-F: 0.240
testset: URL, BLEU: 52.9, chr-F: 0.687
testset: URL, BLEU: 16.3, chr-F: 0.367
testset: URL, BLEU: 12.7, chr-F: 0.245
testset: URL, BLEU: 32.9, chr-F: 0.531
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 40.3, chr-F: 0.626
testset: URL, BLEU: 19.3, chr-F: 0.535
testset: URL, BLEU: 45.0, chr-F: 0.650
testset: URL, BLEU: 53.5, chr-F: 0.709
testset: URL, BLEU: 50.7, chr-F: 0.684
testset: URL, BLEU: 17.9, chr-F: 0.366
testset: URL, BLEU: 23.6, chr-F: 0.548
testset: URL, BLEU: 54.2, chr-F: 0.833
testset: URL, BLEU: 12.1, chr-F: 0.371
testset: URL, BLEU: 19.3, chr-F: 0.577
testset: URL, BLEU: 53.7, chr-F: 0.833
testset: URL, BLEU: 34.2, chr-F: 0.745
testset: URL, BLEU: 42.7, chr-F: 0.708
testset: URL, BLEU: 48.5, chr-F: 0.672
testset: URL, BLEU: 10.1, chr-F: 0.355
testset: URL, BLEU: 10.6, chr-F: 0.275
testset: URL, BLEU: 7.5, chr-F: 0.230
testset: URL, BLEU: 29.8, chr-F: 0.533
testset: URL, BLEU: 36.8, chr-F: 0.578
testset: URL, BLEU: 43.6, chr-F: 0.626
testset: URL, BLEU: 0.9, chr-F: 0.097
testset: URL, BLEU: 42.4, chr-F: 0.644
testset: URL, BLEU: 19.3, chr-F: 0.535
testset: URL, BLEU: 0.7, chr-F: 0.109
testset: URL, BLEU: 49.6, chr-F: 0.680
testset: URL, BLEU: 7.3, chr-F: 0.262
testset: URL, BLEU: 46.8, chr-F: 0.664
testset: URL, BLEU: 34.4, chr-F: 0.577
testset: URL, BLEU: 45.5, chr-F: 0.657
testset: URL, BLEU: 48.0, chr-F: 0.659
testset: URL, BLEU: 10.7, chr-F: 0.029
testset: URL, BLEU: 44.6, chr-F: 0.655
testset: URL, BLEU: 34.9, chr-F: 0.617
testset: URL, BLEU: 0.1, chr-F: 0.073
testset: URL, BLEU: 45.2, chr-F: 0.659
testset: URL, BLEU: 30.4, chr-F: 0.476
testset: URL, BLEU: 57.6, chr-F: 0.751
testset: URL, BLEU: 42.5, chr-F: 0.604
testset: URL, BLEU: 39.6, chr-F: 0.601
testset: URL, BLEU: 47.2, chr-F: 0.638
testset: URL, BLEU: 36.4, chr-F: 0.549
testset: URL, BLEU: 36.9, chr-F: 0.597
testset: URL, BLEU: 56.4, chr-F: 0.733
testset: URL, BLEU: 52.1, chr-F: 0.686
testset: URL, BLEU: 47.1, chr-F: 0.670
testset: URL, BLEU: 20.8, chr-F: 0.548
testset: URL, BLEU: 0.2, chr-F: 0.058
testset: URL, BLEU: 50.1, chr-F: 0.695
testset: URL, BLEU: 63.9, chr-F: 0.790
testset: URL, BLEU: 14.5, chr-F: 0.288
### System Info:
* hf\_name: sla-sla
* source\_languages: sla
* target\_languages: sla
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla']
* src\_constituents: {'bel', 'hrv', 'orv\_Cyrl', 'mkd', 'bel\_Latn', 'srp\_Latn', 'bul\_Latn', 'ces', 'bos\_Latn', 'csb\_Latn', 'dsb', 'hsb', 'rus', 'srp\_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}
* tgt\_constituents: {'bel', 'hrv', 'orv\_Cyrl', 'mkd', 'bel\_Latn', 'srp\_Latn', 'bul\_Latn', 'ces', 'bos\_Latn', 'csb\_Latn', 'dsb', 'hsb', 'rus', 'srp\_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}
* src\_multilingual: True
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: sla
* tgt\_alpha3: sla
* short\_pair: sla-sla
* chrF2\_score: 0.672
* bleu: 48.5
* brevity\_penalty: 1.0
* ref\_len: 59320.0
* src\_name: Slavic languages
* tgt\_name: Slavic languages
* train\_date: 2020-07-27
* src\_alpha2: sla
* tgt\_alpha2: sla
* prefer\_old: False
* long\_pair: sla-sla
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### sla-sla\n\n\n* source group: Slavic languages\n* target group: Slavic languages\n* OPUS readme: sla-sla\n* model: transformer\n* source language(s): bel bel\\_Latn bos\\_Latn bul bul\\_Latn ces dsb hrv hsb mkd orv\\_Cyrl pol rus slv srp\\_Cyrl srp\\_Latn ukr\n* target language(s): bel bel\\_Latn bos\\_Latn bul bul\\_Latn ces dsb hrv hsb mkd orv\\_Cyrl pol rus slv srp\\_Cyrl srp\\_Latn ukr\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 15.9, chr-F: 0.437\ntestset: URL, BLEU: 13.6, chr-F: 0.403\ntestset: URL, BLEU: 19.8, chr-F: 0.473\ntestset: URL, BLEU: 17.9, chr-F: 0.449\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 33.5, chr-F: 0.630\ntestset: URL, BLEU: 45.4, chr-F: 0.644\ntestset: URL, BLEU: 19.3, chr-F: 0.531\ntestset: URL, BLEU: 46.9, chr-F: 0.681\ntestset: URL, BLEU: 58.5, chr-F: 0.767\ntestset: URL, BLEU: 55.1, chr-F: 0.743\ntestset: URL, BLEU: 10.7, chr-F: 0.423\ntestset: URL, BLEU: 36.9, chr-F: 0.585\ntestset: URL, BLEU: 53.7, chr-F: 0.807\ntestset: URL, BLEU: 31.9, chr-F: 0.715\ntestset: URL, BLEU: 38.6, chr-F: 0.607\ntestset: URL, BLEU: 44.8, chr-F: 0.655\ntestset: URL, BLEU: 49.9, chr-F: 0.691\ntestset: URL, BLEU: 30.9, chr-F: 0.585\ntestset: URL, BLEU: 75.8, chr-F: 0.859\ntestset: URL, BLEU: 50.0, chr-F: 0.661\ntestset: URL, BLEU: 7.9, chr-F: 0.246\ntestset: URL, BLEU: 24.6, chr-F: 0.569\ntestset: URL, BLEU: 44.3, chr-F: 0.652\ntestset: URL, BLEU: 50.8, chr-F: 0.690\ntestset: URL, BLEU: 4.9, chr-F: 0.240\ntestset: URL, BLEU: 52.9, chr-F: 0.687\ntestset: URL, BLEU: 16.3, chr-F: 0.367\ntestset: URL, BLEU: 12.7, chr-F: 0.245\ntestset: URL, BLEU: 32.9, chr-F: 0.531\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 40.3, chr-F: 0.626\ntestset: URL, BLEU: 19.3, chr-F: 0.535\ntestset: URL, BLEU: 45.0, chr-F: 0.650\ntestset: URL, BLEU: 53.5, chr-F: 0.709\ntestset: URL, BLEU: 50.7, chr-F: 0.684\ntestset: URL, BLEU: 17.9, chr-F: 0.366\ntestset: URL, BLEU: 23.6, chr-F: 0.548\ntestset: URL, BLEU: 54.2, chr-F: 0.833\ntestset: URL, BLEU: 12.1, chr-F: 0.371\ntestset: URL, BLEU: 19.3, chr-F: 0.577\ntestset: URL, BLEU: 53.7, chr-F: 0.833\ntestset: URL, BLEU: 34.2, chr-F: 0.745\ntestset: URL, BLEU: 42.7, chr-F: 0.708\ntestset: URL, BLEU: 48.5, chr-F: 0.672\ntestset: URL, BLEU: 10.1, chr-F: 0.355\ntestset: URL, BLEU: 10.6, chr-F: 0.275\ntestset: URL, BLEU: 7.5, chr-F: 0.230\ntestset: URL, BLEU: 29.8, chr-F: 0.533\ntestset: URL, BLEU: 36.8, chr-F: 0.578\ntestset: URL, BLEU: 43.6, chr-F: 0.626\ntestset: URL, BLEU: 0.9, chr-F: 0.097\ntestset: URL, BLEU: 42.4, chr-F: 0.644\ntestset: URL, BLEU: 19.3, chr-F: 0.535\ntestset: URL, BLEU: 0.7, chr-F: 0.109\ntestset: URL, BLEU: 49.6, chr-F: 0.680\ntestset: URL, BLEU: 7.3, chr-F: 0.262\ntestset: URL, BLEU: 46.8, chr-F: 0.664\ntestset: URL, BLEU: 34.4, chr-F: 0.577\ntestset: URL, BLEU: 45.5, chr-F: 0.657\ntestset: URL, BLEU: 48.0, chr-F: 0.659\ntestset: URL, BLEU: 10.7, chr-F: 0.029\ntestset: URL, BLEU: 44.6, chr-F: 0.655\ntestset: URL, BLEU: 34.9, chr-F: 0.617\ntestset: URL, BLEU: 0.1, chr-F: 0.073\ntestset: URL, BLEU: 45.2, chr-F: 0.659\ntestset: URL, BLEU: 30.4, chr-F: 0.476\ntestset: URL, BLEU: 57.6, chr-F: 0.751\ntestset: URL, BLEU: 42.5, chr-F: 0.604\ntestset: URL, BLEU: 39.6, chr-F: 0.601\ntestset: URL, BLEU: 47.2, chr-F: 0.638\ntestset: 
URL, BLEU: 36.4, chr-F: 0.549\ntestset: URL, BLEU: 36.9, chr-F: 0.597\ntestset: URL, BLEU: 56.4, chr-F: 0.733\ntestset: URL, BLEU: 52.1, chr-F: 0.686\ntestset: URL, BLEU: 47.1, chr-F: 0.670\ntestset: URL, BLEU: 20.8, chr-F: 0.548\ntestset: URL, BLEU: 0.2, chr-F: 0.058\ntestset: URL, BLEU: 50.1, chr-F: 0.695\ntestset: URL, BLEU: 63.9, chr-F: 0.790\ntestset: URL, BLEU: 14.5, chr-F: 0.288",
"### System Info:\n\n\n* hf\\_name: sla-sla\n* source\\_languages: sla\n* target\\_languages: sla\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla']\n* src\\_constituents: {'bel', 'hrv', 'orv\\_Cyrl', 'mkd', 'bel\\_Latn', 'srp\\_Latn', 'bul\\_Latn', 'ces', 'bos\\_Latn', 'csb\\_Latn', 'dsb', 'hsb', 'rus', 'srp\\_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}\n* tgt\\_constituents: {'bel', 'hrv', 'orv\\_Cyrl', 'mkd', 'bel\\_Latn', 'srp\\_Latn', 'bul\\_Latn', 'ces', 'bos\\_Latn', 'csb\\_Latn', 'dsb', 'hsb', 'rus', 'srp\\_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: sla\n* tgt\\_alpha3: sla\n* short\\_pair: sla-sla\n* chrF2\\_score: 0.672\n* bleu: 48.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 59320.0\n* src\\_name: Slavic languages\n* tgt\\_name: Slavic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: sla\n* tgt\\_alpha2: sla\n* prefer\\_old: False\n* long\\_pair: sla-sla\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #be #hr #mk #cs #ru #pl #bg #uk #sl #sla #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### sla-sla\n\n\n* source group: Slavic languages\n* target group: Slavic languages\n* OPUS readme: sla-sla\n* model: transformer\n* source language(s): bel bel\\_Latn bos\\_Latn bul bul\\_Latn ces dsb hrv hsb mkd orv\\_Cyrl pol rus slv srp\\_Cyrl srp\\_Latn ukr\n* target language(s): bel bel\\_Latn bos\\_Latn bul bul\\_Latn ces dsb hrv hsb mkd orv\\_Cyrl pol rus slv srp\\_Cyrl srp\\_Latn ukr\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 15.9, chr-F: 0.437\ntestset: URL, BLEU: 13.6, chr-F: 0.403\ntestset: URL, BLEU: 19.8, chr-F: 0.473\ntestset: URL, BLEU: 17.9, chr-F: 0.449\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 33.5, chr-F: 0.630\ntestset: URL, BLEU: 45.4, chr-F: 0.644\ntestset: URL, BLEU: 19.3, chr-F: 0.531\ntestset: URL, BLEU: 46.9, chr-F: 0.681\ntestset: URL, BLEU: 58.5, chr-F: 0.767\ntestset: URL, BLEU: 55.1, chr-F: 0.743\ntestset: URL, BLEU: 10.7, chr-F: 0.423\ntestset: URL, BLEU: 36.9, chr-F: 0.585\ntestset: URL, BLEU: 53.7, chr-F: 0.807\ntestset: URL, BLEU: 31.9, chr-F: 0.715\ntestset: URL, BLEU: 38.6, chr-F: 0.607\ntestset: URL, BLEU: 44.8, chr-F: 0.655\ntestset: URL, BLEU: 49.9, chr-F: 0.691\ntestset: URL, BLEU: 30.9, chr-F: 0.585\ntestset: URL, BLEU: 75.8, chr-F: 0.859\ntestset: URL, BLEU: 50.0, chr-F: 0.661\ntestset: URL, BLEU: 7.9, chr-F: 0.246\ntestset: URL, BLEU: 24.6, chr-F: 0.569\ntestset: URL, BLEU: 44.3, chr-F: 0.652\ntestset: URL, BLEU: 50.8, chr-F: 0.690\ntestset: URL, BLEU: 4.9, chr-F: 0.240\ntestset: URL, BLEU: 52.9, chr-F: 0.687\ntestset: URL, BLEU: 16.3, chr-F: 0.367\ntestset: URL, BLEU: 12.7, chr-F: 0.245\ntestset: URL, BLEU: 32.9, chr-F: 0.531\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 40.3, chr-F: 0.626\ntestset: URL, BLEU: 19.3, chr-F: 0.535\ntestset: URL, BLEU: 45.0, chr-F: 0.650\ntestset: URL, BLEU: 53.5, chr-F: 0.709\ntestset: URL, BLEU: 50.7, chr-F: 0.684\ntestset: URL, BLEU: 17.9, chr-F: 0.366\ntestset: URL, BLEU: 23.6, chr-F: 0.548\ntestset: URL, BLEU: 54.2, chr-F: 0.833\ntestset: URL, BLEU: 12.1, chr-F: 0.371\ntestset: URL, BLEU: 19.3, chr-F: 0.577\ntestset: URL, BLEU: 53.7, chr-F: 0.833\ntestset: URL, BLEU: 34.2, chr-F: 0.745\ntestset: URL, BLEU: 42.7, chr-F: 0.708\ntestset: URL, BLEU: 48.5, chr-F: 0.672\ntestset: URL, BLEU: 10.1, chr-F: 0.355\ntestset: URL, BLEU: 10.6, chr-F: 0.275\ntestset: URL, BLEU: 7.5, chr-F: 0.230\ntestset: URL, BLEU: 29.8, chr-F: 0.533\ntestset: URL, BLEU: 36.8, chr-F: 0.578\ntestset: URL, BLEU: 43.6, chr-F: 0.626\ntestset: URL, BLEU: 0.9, chr-F: 0.097\ntestset: URL, BLEU: 42.4, chr-F: 0.644\ntestset: URL, BLEU: 19.3, chr-F: 0.535\ntestset: URL, BLEU: 0.7, chr-F: 0.109\ntestset: URL, BLEU: 49.6, chr-F: 0.680\ntestset: URL, BLEU: 7.3, chr-F: 0.262\ntestset: URL, BLEU: 46.8, chr-F: 0.664\ntestset: URL, BLEU: 34.4, chr-F: 0.577\ntestset: URL, BLEU: 45.5, chr-F: 0.657\ntestset: URL, BLEU: 48.0, chr-F: 0.659\ntestset: URL, BLEU: 10.7, chr-F: 0.029\ntestset: URL, BLEU: 44.6, chr-F: 0.655\ntestset: URL, BLEU: 34.9, chr-F: 0.617\ntestset: URL, BLEU: 0.1, chr-F: 0.073\ntestset: URL, BLEU: 45.2, chr-F: 0.659\ntestset: URL, BLEU: 30.4, chr-F: 0.476\ntestset: URL, BLEU: 57.6, chr-F: 0.751\ntestset: URL, BLEU: 42.5, chr-F: 0.604\ntestset: URL, BLEU: 39.6, chr-F: 0.601\ntestset: URL, BLEU: 47.2, chr-F: 0.638\ntestset: 
URL, BLEU: 36.4, chr-F: 0.549\ntestset: URL, BLEU: 36.9, chr-F: 0.597\ntestset: URL, BLEU: 56.4, chr-F: 0.733\ntestset: URL, BLEU: 52.1, chr-F: 0.686\ntestset: URL, BLEU: 47.1, chr-F: 0.670\ntestset: URL, BLEU: 20.8, chr-F: 0.548\ntestset: URL, BLEU: 0.2, chr-F: 0.058\ntestset: URL, BLEU: 50.1, chr-F: 0.695\ntestset: URL, BLEU: 63.9, chr-F: 0.790\ntestset: URL, BLEU: 14.5, chr-F: 0.288",
"### System Info:\n\n\n* hf\\_name: sla-sla\n* source\\_languages: sla\n* target\\_languages: sla\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla']\n* src\\_constituents: {'bel', 'hrv', 'orv\\_Cyrl', 'mkd', 'bel\\_Latn', 'srp\\_Latn', 'bul\\_Latn', 'ces', 'bos\\_Latn', 'csb\\_Latn', 'dsb', 'hsb', 'rus', 'srp\\_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}\n* tgt\\_constituents: {'bel', 'hrv', 'orv\\_Cyrl', 'mkd', 'bel\\_Latn', 'srp\\_Latn', 'bul\\_Latn', 'ces', 'bos\\_Latn', 'csb\\_Latn', 'dsb', 'hsb', 'rus', 'srp\\_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: sla\n* tgt\\_alpha3: sla\n* short\\_pair: sla-sla\n* chrF2\\_score: 0.672\n* bleu: 48.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 59320.0\n* src\\_name: Slavic languages\n* tgt\\_name: Slavic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: sla\n* tgt\\_alpha2: sla\n* prefer\\_old: False\n* long\\_pair: sla-sla\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
69,
2097,
677
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #be #hr #mk #cs #ru #pl #bg #uk #sl #sla #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### sla-sla\n\n\n* source group: Slavic languages\n* target group: Slavic languages\n* OPUS readme: sla-sla\n* model: transformer\n* source language(s): bel bel\\_Latn bos\\_Latn bul bul\\_Latn ces dsb hrv hsb mkd orv\\_Cyrl pol rus slv srp\\_Cyrl srp\\_Latn ukr\n* target language(s): bel bel\\_Latn bos\\_Latn bul bul\\_Latn ces dsb hrv hsb mkd orv\\_Cyrl pol rus slv srp\\_Cyrl srp\\_Latn ukr\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 15.9, chr-F: 0.437\ntestset: URL, BLEU: 13.6, chr-F: 0.403\ntestset: URL, BLEU: 19.8, chr-F: 0.473\ntestset: URL, BLEU: 17.9, chr-F: 0.449\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 33.5, chr-F: 0.630\ntestset: URL, BLEU: 45.4, chr-F: 0.644\ntestset: URL, BLEU: 19.3, chr-F: 0.531\ntestset: URL, BLEU: 46.9, chr-F: 0.681\ntestset: URL, BLEU: 58.5, chr-F: 0.767\ntestset: URL, BLEU: 55.1, chr-F: 0.743\ntestset: URL, BLEU: 10.7, chr-F: 0.423\ntestset: URL, BLEU: 36.9, chr-F: 0.585\ntestset: URL, BLEU: 53.7, chr-F: 0.807\ntestset: URL, BLEU: 31.9, chr-F: 0.715\ntestset: URL, BLEU: 38.6, chr-F: 0.607\ntestset: URL, BLEU: 44.8, chr-F: 0.655\ntestset: URL, BLEU: 49.9, chr-F: 0.691\ntestset: URL, BLEU: 30.9, chr-F: 0.585\ntestset: URL, BLEU: 75.8, chr-F: 0.859\ntestset: URL, BLEU: 50.0, chr-F: 0.661\ntestset: URL, BLEU: 7.9, chr-F: 0.246\ntestset: URL, BLEU: 24.6, chr-F: 0.569\ntestset: URL, BLEU: 44.3, chr-F: 0.652\ntestset: URL, BLEU: 50.8, chr-F: 0.690\ntestset: URL, BLEU: 4.9, chr-F: 0.240\ntestset: URL, BLEU: 52.9, chr-F: 0.687\ntestset: URL, BLEU: 16.3, chr-F: 0.367\ntestset: URL, BLEU: 12.7, chr-F: 0.245\ntestset: URL, BLEU: 32.9, chr-F: 0.531\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 40.3, chr-F: 0.626\ntestset: URL, BLEU: 19.3, chr-F: 0.535\ntestset: URL, BLEU: 45.0, chr-F: 0.650\ntestset: URL, BLEU: 53.5, chr-F: 0.709\ntestset: URL, BLEU: 50.7, chr-F: 0.684\ntestset: URL, BLEU: 17.9, chr-F: 0.366\ntestset: URL, BLEU: 23.6, chr-F: 0.548\ntestset: URL, BLEU: 54.2, chr-F: 0.833\ntestset: URL, BLEU: 12.1, chr-F: 0.371\ntestset: URL, BLEU: 19.3, chr-F: 0.577\ntestset: URL, BLEU: 53.7, chr-F: 0.833\ntestset: URL, BLEU: 34.2, chr-F: 0.745\ntestset: URL, BLEU: 42.7, chr-F: 0.708\ntestset: URL, BLEU: 48.5, chr-F: 0.672\ntestset: URL, BLEU: 10.1, chr-F: 0.355\ntestset: URL, BLEU: 10.6, chr-F: 0.275\ntestset: URL, BLEU: 7.5, chr-F: 0.230\ntestset: URL, BLEU: 29.8, chr-F: 0.533\ntestset: URL, BLEU: 36.8, chr-F: 0.578\ntestset: URL, BLEU: 43.6, chr-F: 0.626\ntestset: URL, BLEU: 0.9, chr-F: 0.097\ntestset: URL, BLEU: 42.4, chr-F: 0.644\ntestset: URL, BLEU: 19.3, chr-F: 0.535\ntestset: URL, BLEU: 0.7, chr-F: 0.109\ntestset: URL, BLEU: 49.6, chr-F: 0.680\ntestset: URL, BLEU: 7.3, chr-F: 0.262\ntestset: URL, BLEU: 46.8, chr-F: 0.664\ntestset: URL, BLEU: 34.4, chr-F: 0.577\ntestset: URL, BLEU: 45.5, chr-F: 0.657\ntestset: URL, BLEU: 48.0, chr-F: 0.659\ntestset: URL, BLEU: 10.7, chr-F: 0.029\ntestset: URL, BLEU: 44.6, chr-F: 0.655\ntestset: URL, BLEU: 34.9, chr-F: 0.617\ntestset: URL, BLEU: 0.1, chr-F: 0.073\ntestset: URL, BLEU: 45.2, chr-F: 
0.659\ntestset: URL, BLEU: 30.4, chr-F: 0.476\ntestset: URL, BLEU: 57.6, chr-F: 0.751\ntestset: URL, BLEU: 42.5, chr-F: 0.604\ntestset: URL, BLEU: 39.6, chr-F: 0.601\ntestset: URL, BLEU: 47.2, chr-F: 0.638\ntestset: URL, BLEU: 36.4, chr-F: 0.549\ntestset: URL, BLEU: 36.9, chr-F: 0.597\ntestset: URL, BLEU: 56.4, chr-F: 0.733\ntestset: URL, BLEU: 52.1, chr-F: 0.686\ntestset: URL, BLEU: 47.1, chr-F: 0.670\ntestset: URL, BLEU: 20.8, chr-F: 0.548\ntestset: URL, BLEU: 0.2, chr-F: 0.058\ntestset: URL, BLEU: 50.1, chr-F: 0.695\ntestset: URL, BLEU: 63.9, chr-F: 0.790\ntestset: URL, BLEU: 14.5, chr-F: 0.288### System Info:\n\n\n* hf\\_name: sla-sla\n* source\\_languages: sla\n* target\\_languages: sla\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla']\n* src\\_constituents: {'bel', 'hrv', 'orv\\_Cyrl', 'mkd', 'bel\\_Latn', 'srp\\_Latn', 'bul\\_Latn', 'ces', 'bos\\_Latn', 'csb\\_Latn', 'dsb', 'hsb', 'rus', 'srp\\_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}\n* tgt\\_constituents: {'bel', 'hrv', 'orv\\_Cyrl', 'mkd', 'bel\\_Latn', 'srp\\_Latn', 'bul\\_Latn', 'ces', 'bos\\_Latn', 'csb\\_Latn', 'dsb', 'hsb', 'rus', 'srp\\_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: sla\n* tgt\\_alpha3: sla\n* short\\_pair: sla-sla\n* chrF2\\_score: 0.672\n* bleu: 48.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 59320.0\n* src\\_name: Slavic languages\n* tgt\\_name: Slavic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: sla\n* tgt\\_alpha2: sla\n* prefer\\_old: False\n* long\\_pair: sla-sla\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-sm-en
* source languages: sm
* target languages: en
* OPUS readme: [sm-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sm-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sm-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sm-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sm-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sm.en | 36.1 | 0.520 |
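For reference, a minimal usage sketch with the `transformers` Marian classes is shown below; the Samoan input sentence is only an illustration.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-sm-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Talofa, o a mai oe?"], return_tensors="pt", padding=True)  # illustrative Samoan input
output_ids = model.generate(**batch)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))  # English translation(s)
```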
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sm-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sm",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sm #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sm-en
* source languages: sm
* target languages: en
* OPUS readme: sm-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 36.1, chr-F: 0.520
| [
"### opus-mt-sm-en\n\n\n* source languages: sm\n* target languages: en\n* OPUS readme: sm-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.1, chr-F: 0.520"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sm #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sm-en\n\n\n* source languages: sm\n* target languages: en\n* OPUS readme: sm-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.1, chr-F: 0.520"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sm #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sm-en\n\n\n* source languages: sm\n* target languages: en\n* OPUS readme: sm-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.1, chr-F: 0.520"
] |
translation | transformers |
### opus-mt-sm-es
* source languages: sm
* target languages: es
* OPUS readme: [sm-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sm-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sm-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sm-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sm-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sm.es | 21.3 | 0.390 |
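Because pre-processing is normalization plus SentencePiece, the tokenizer bundled with the checkpoint applies the subword model for you; the sketch below only inspects the pieces it produces, with an illustrative Samoan input.

```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-sm-es")

pieces = tokenizer.tokenize("Talofa lava.")        # SentencePiece subword pieces
ids = tokenizer.convert_tokens_to_ids(pieces)      # vocabulary ids fed to the encoder
print(pieces)
print(ids)
```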
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sm-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sm",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sm #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sm-es
* source languages: sm
* target languages: es
* OPUS readme: sm-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.3, chr-F: 0.390
| [
"### opus-mt-sm-es\n\n\n* source languages: sm\n* target languages: es\n* OPUS readme: sm-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.3, chr-F: 0.390"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sm #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sm-es\n\n\n* source languages: sm\n* target languages: es\n* OPUS readme: sm-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.3, chr-F: 0.390"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sm #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sm-es\n\n\n* source languages: sm\n* target languages: es\n* OPUS readme: sm-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.3, chr-F: 0.390"
] |
translation | transformers |
### opus-mt-sm-fr
* source languages: sm
* target languages: fr
* OPUS readme: [sm-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sm-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sm-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sm-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sm-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sm.fr | 24.6 | 0.419 |
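The checkpoint is tagged for both PyTorch and TensorFlow; assuming the TF weights are published (as the `tf` tag suggests), a TensorFlow-side sketch could look like this, with an illustrative input sentence.

```python
from transformers import MarianTokenizer, TFMarianMTModel

model_name = "Helsinki-NLP/opus-mt-sm-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = TFMarianMTModel.from_pretrained(model_name)  # assumes TF weights are available

batch = tokenizer(["Talofa!"], return_tensors="tf", padding=True)  # illustrative Samoan input
output_ids = model.generate(**batch)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))  # French translation(s)
```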
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sm-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sm",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sm #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sm-fr
* source languages: sm
* target languages: fr
* OPUS readme: sm-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.6, chr-F: 0.419
| [
"### opus-mt-sm-fr\n\n\n* source languages: sm\n* target languages: fr\n* OPUS readme: sm-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.6, chr-F: 0.419"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sm #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sm-fr\n\n\n* source languages: sm\n* target languages: fr\n* OPUS readme: sm-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.6, chr-F: 0.419"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sm #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sm-fr\n\n\n* source languages: sm\n* target languages: fr\n* OPUS readme: sm-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.6, chr-F: 0.419"
] |
translation | transformers |
### opus-mt-sn-en
* source languages: sn
* target languages: en
* OPUS readme: [sn-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sn-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sn-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sn.en | 51.8 | 0.648 |
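The quickest way to try the checkpoint is the high-level `pipeline` API; a sketch with an illustrative Shona input follows.

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-sn-en")
result = translator("Mhoro, makadii?")  # illustrative Shona input
print(result[0]["translation_text"])    # English output
```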
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sn-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sn",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sn-en
* source languages: sn
* target languages: en
* OPUS readme: sn-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 51.8, chr-F: 0.648
| [
"### opus-mt-sn-en\n\n\n* source languages: sn\n* target languages: en\n* OPUS readme: sn-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.8, chr-F: 0.648"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sn-en\n\n\n* source languages: sn\n* target languages: en\n* OPUS readme: sn-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.8, chr-F: 0.648"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sn-en\n\n\n* source languages: sn\n* target languages: en\n* OPUS readme: sn-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.8, chr-F: 0.648"
] |
translation | transformers |
### opus-mt-sn-es
* source languages: sn
* target languages: es
* OPUS readme: [sn-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sn-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sn-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sn.es | 32.5 | 0.509 |
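The original Marian weights linked above can also be fetched and unpacked directly; a standard-library sketch, using the archive name from this card.

```python
import urllib.request
import zipfile

url = "https://object.pouta.csc.fi/OPUS-MT-models/sn-es/opus-2020-01-16.zip"
archive = "opus-2020-01-16.zip"

urllib.request.urlretrieve(url, archive)        # download the original Marian release
with zipfile.ZipFile(archive) as zf:
    zf.extractall("opus-mt-sn-es-original")     # unpack next to the script
```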
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sn-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sn",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sn #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sn-es
* source languages: sn
* target languages: es
* OPUS readme: sn-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 32.5, chr-F: 0.509
| [
"### opus-mt-sn-es\n\n\n* source languages: sn\n* target languages: es\n* OPUS readme: sn-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.5, chr-F: 0.509"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sn #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sn-es\n\n\n* source languages: sn\n* target languages: es\n* OPUS readme: sn-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.5, chr-F: 0.509"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sn #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sn-es\n\n\n* source languages: sn\n* target languages: es\n* OPUS readme: sn-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.5, chr-F: 0.509"
] |
translation | transformers |
### opus-mt-sn-fr
* source languages: sn
* target languages: fr
* OPUS readme: [sn-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sn-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sn-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sn.fr | 30.8 | 0.491 |
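For larger workloads the model can be placed on a GPU when one is available; a sketch, with an illustrative Shona input.

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-sn-fr"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name).to(device)

batch = tokenizer(["Mhoro shamwari."], return_tensors="pt", padding=True).to(device)  # illustrative input
output_ids = model.generate(**batch)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))  # French translation(s)
```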
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sn-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sn",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sn #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sn-fr
* source languages: sn
* target languages: fr
* OPUS readme: sn-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.8, chr-F: 0.491
| [
"### opus-mt-sn-fr\n\n\n* source languages: sn\n* target languages: fr\n* OPUS readme: sn-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.8, chr-F: 0.491"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sn #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sn-fr\n\n\n* source languages: sn\n* target languages: fr\n* OPUS readme: sn-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.8, chr-F: 0.491"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sn #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sn-fr\n\n\n* source languages: sn\n* target languages: fr\n* OPUS readme: sn-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.8, chr-F: 0.491"
] |
translation | transformers |
### opus-mt-sn-sv
* source languages: sn
* target languages: sv
* OPUS readme: [sn-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sn-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sn-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sn.sv | 35.6 | 0.536 |
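Scores like the one above can be approximated locally by translating the released test set and scoring with `sacrebleu`; in the sketch below, `hypotheses` and `references` are placeholders for the model outputs and the reference translations, and a local run may not exactly reproduce the figure reported here.

```python
import sacrebleu

hypotheses = ["..."]   # fill with model translations of the test set
references = ["..."]   # fill with the reference translations, one per segment

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
# Depending on the sacrebleu version, chrF is reported on a 0-1 or 0-100 scale.
print(bleu.score, chrf.score)
```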
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sn-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sn",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sn #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sn-sv
* source languages: sn
* target languages: sv
* OPUS readme: sn-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 35.6, chr-F: 0.536
| [
"### opus-mt-sn-sv\n\n\n* source languages: sn\n* target languages: sv\n* OPUS readme: sn-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.6, chr-F: 0.536"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sn #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sn-sv\n\n\n* source languages: sn\n* target languages: sv\n* OPUS readme: sn-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.6, chr-F: 0.536"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sn #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sn-sv\n\n\n* source languages: sn\n* target languages: sv\n* OPUS readme: sn-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.6, chr-F: 0.536"
] |
translation | transformers |
### opus-mt-sq-en
* source languages: sq
* target languages: en
* OPUS readme: [sq-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sq-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sq-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sq-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sq-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.sq.en | 58.4 | 0.732 |
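The card itself only lists metadata and scores; as a usage hint, the model can be loaded with the Hugging Face `transformers` library. The snippet below is a minimal, illustrative sketch and is not part of the original card: the example sentence is invented, and only the model id `Helsinki-NLP/opus-mt-sq-en` is taken from the card.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-sq-en"  # model id as published on the Hub
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a small batch of Albanian sentences into English.
src_texts = ["Si jeni?"]  # illustrative input only
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```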
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sq-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sq",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sq #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sq-en
* source languages: sq
* target languages: en
* OPUS readme: sq-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 58.4, chr-F: 0.732
| [
"### opus-mt-sq-en\n\n\n* source languages: sq\n* target languages: en\n* OPUS readme: sq-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 58.4, chr-F: 0.732"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sq #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sq-en\n\n\n* source languages: sq\n* target languages: en\n* OPUS readme: sq-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 58.4, chr-F: 0.732"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sq #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sq-en\n\n\n* source languages: sq\n* target languages: en\n* OPUS readme: sq-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 58.4, chr-F: 0.732"
] |
translation | transformers |
### opus-mt-sq-es
* source languages: sq
* target languages: es
* OPUS readme: [sq-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sq-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sq-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sq-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sq-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.sq.es | 23.9 | 0.510 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sq-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sq",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sq #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sq-es
* source languages: sq
* target languages: es
* OPUS readme: sq-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.9, chr-F: 0.510
| [
"### opus-mt-sq-es\n\n\n* source languages: sq\n* target languages: es\n* OPUS readme: sq-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.9, chr-F: 0.510"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sq #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sq-es\n\n\n* source languages: sq\n* target languages: es\n* OPUS readme: sq-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.9, chr-F: 0.510"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sq #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sq-es\n\n\n* source languages: sq\n* target languages: es\n* OPUS readme: sq-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.9, chr-F: 0.510"
] |
translation | transformers |
### opus-mt-sq-sv
* source languages: sq
* target languages: sv
* OPUS readme: [sq-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sq-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sq-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sq-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sq-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sq.sv | 36.2 | 0.559 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sq-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sq",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sq #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sq-sv
* source languages: sq
* target languages: sv
* OPUS readme: sq-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 36.2, chr-F: 0.559
| [
"### opus-mt-sq-sv\n\n\n* source languages: sq\n* target languages: sv\n* OPUS readme: sq-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.2, chr-F: 0.559"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sq #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sq-sv\n\n\n* source languages: sq\n* target languages: sv\n* OPUS readme: sq-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.2, chr-F: 0.559"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sq #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sq-sv\n\n\n* source languages: sq\n* target languages: sv\n* OPUS readme: sq-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.2, chr-F: 0.559"
] |
translation | transformers |
### opus-mt-srn-en
* source languages: srn
* target languages: en
* OPUS readme: [srn-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/srn-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/srn-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.srn.en | 40.3 | 0.555 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-srn-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"srn",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #srn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-srn-en
* source languages: srn
* target languages: en
* OPUS readme: srn-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 40.3, chr-F: 0.555
| [
"### opus-mt-srn-en\n\n\n* source languages: srn\n* target languages: en\n* OPUS readme: srn-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.3, chr-F: 0.555"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #srn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-srn-en\n\n\n* source languages: srn\n* target languages: en\n* OPUS readme: srn-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.3, chr-F: 0.555"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #srn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-srn-en\n\n\n* source languages: srn\n* target languages: en\n* OPUS readme: srn-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.3, chr-F: 0.555"
] |
translation | transformers |
### opus-mt-srn-es
* source languages: srn
* target languages: es
* OPUS readme: [srn-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/srn-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/srn-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.srn.es | 30.4 | 0.481 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-srn-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"srn",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #srn #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-srn-es
* source languages: srn
* target languages: es
* OPUS readme: srn-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.4, chr-F: 0.481
| [
"### opus-mt-srn-es\n\n\n* source languages: srn\n* target languages: es\n* OPUS readme: srn-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.481"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #srn #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-srn-es\n\n\n* source languages: srn\n* target languages: es\n* OPUS readme: srn-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.481"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #srn #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-srn-es\n\n\n* source languages: srn\n* target languages: es\n* OPUS readme: srn-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.481"
] |
translation | transformers |
### opus-mt-srn-fr
* source languages: srn
* target languages: fr
* OPUS readme: [srn-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/srn-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/srn-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.srn.fr | 28.9 | 0.462 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-srn-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"srn",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #srn #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-srn-fr
* source languages: srn
* target languages: fr
* OPUS readme: srn-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 28.9, chr-F: 0.462
| [
"### opus-mt-srn-fr\n\n\n* source languages: srn\n* target languages: fr\n* OPUS readme: srn-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.9, chr-F: 0.462"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #srn #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-srn-fr\n\n\n* source languages: srn\n* target languages: fr\n* OPUS readme: srn-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.9, chr-F: 0.462"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #srn #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-srn-fr\n\n\n* source languages: srn\n* target languages: fr\n* OPUS readme: srn-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.9, chr-F: 0.462"
] |
translation | transformers |
### opus-mt-srn-sv
* source languages: srn
* target languages: sv
* OPUS readme: [srn-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/srn-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/srn-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.srn.sv | 32.2 | 0.500 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-srn-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"srn",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #srn #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-srn-sv
* source languages: srn
* target languages: sv
* OPUS readme: srn-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 32.2, chr-F: 0.500
| [
"### opus-mt-srn-sv\n\n\n* source languages: srn\n* target languages: sv\n* OPUS readme: srn-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.2, chr-F: 0.500"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #srn #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-srn-sv\n\n\n* source languages: srn\n* target languages: sv\n* OPUS readme: srn-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.2, chr-F: 0.500"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #srn #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-srn-sv\n\n\n* source languages: srn\n* target languages: sv\n* OPUS readme: srn-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.2, chr-F: 0.500"
] |
translation | transformers |
### opus-mt-ss-en
* source languages: ss
* target languages: en
* OPUS readme: [ss-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ss-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ss-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ss-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ss-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ss.en | 30.9 | 0.478 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ss-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ss",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ss #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ss-en
* source languages: ss
* target languages: en
* OPUS readme: ss-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.9, chr-F: 0.478
| [
"### opus-mt-ss-en\n\n\n* source languages: ss\n* target languages: en\n* OPUS readme: ss-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.9, chr-F: 0.478"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ss #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ss-en\n\n\n* source languages: ss\n* target languages: en\n* OPUS readme: ss-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.9, chr-F: 0.478"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ss #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ss-en\n\n\n* source languages: ss\n* target languages: en\n* OPUS readme: ss-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.9, chr-F: 0.478"
] |
translation | transformers |
### opus-mt-ssp-es
* source languages: ssp
* target languages: es
* OPUS readme: [ssp-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ssp-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ssp-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ssp-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ssp-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ssp.es | 89.7 | 0.930 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ssp-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ssp",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ssp #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ssp-es
* source languages: ssp
* target languages: es
* OPUS readme: ssp-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 89.7, chr-F: 0.930
| [
"### opus-mt-ssp-es\n\n\n* source languages: ssp\n* target languages: es\n* OPUS readme: ssp-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 89.7, chr-F: 0.930"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ssp #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ssp-es\n\n\n* source languages: ssp\n* target languages: es\n* OPUS readme: ssp-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 89.7, chr-F: 0.930"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ssp #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ssp-es\n\n\n* source languages: ssp\n* target languages: es\n* OPUS readme: ssp-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 89.7, chr-F: 0.930"
] |
translation | transformers |
### opus-mt-st-en
* source languages: st
* target languages: en
* OPUS readme: [st-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/st-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/st-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.st.en | 45.7 | 0.609 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-st-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"st",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #st #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-st-en
* source languages: st
* target languages: en
* OPUS readme: st-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 45.7, chr-F: 0.609
| [
"### opus-mt-st-en\n\n\n* source languages: st\n* target languages: en\n* OPUS readme: st-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.7, chr-F: 0.609"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #st #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-st-en\n\n\n* source languages: st\n* target languages: en\n* OPUS readme: st-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.7, chr-F: 0.609"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #st #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-st-en\n\n\n* source languages: st\n* target languages: en\n* OPUS readme: st-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.7, chr-F: 0.609"
] |
translation | transformers |
### opus-mt-st-es
* source languages: st
* target languages: es
* OPUS readme: [st-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/st-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/st-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.st.es | 31.3 | 0.499 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-st-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"st",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #st #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-st-es
* source languages: st
* target languages: es
* OPUS readme: st-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.3, chr-F: 0.499
| [
"### opus-mt-st-es\n\n\n* source languages: st\n* target languages: es\n* OPUS readme: st-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.3, chr-F: 0.499"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #st #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-st-es\n\n\n* source languages: st\n* target languages: es\n* OPUS readme: st-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.3, chr-F: 0.499"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #st #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-st-es\n\n\n* source languages: st\n* target languages: es\n* OPUS readme: st-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.3, chr-F: 0.499"
] |
translation | transformers |
### opus-mt-st-fi
* source languages: st
* target languages: fi
* OPUS readme: [st-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/st-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/st-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.st.fi | 28.8 | 0.520 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-st-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"st",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #st #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-st-fi
* source languages: st
* target languages: fi
* OPUS readme: st-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 28.8, chr-F: 0.520
| [
"### opus-mt-st-fi\n\n\n* source languages: st\n* target languages: fi\n* OPUS readme: st-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.8, chr-F: 0.520"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #st #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-st-fi\n\n\n* source languages: st\n* target languages: fi\n* OPUS readme: st-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.8, chr-F: 0.520"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #st #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-st-fi\n\n\n* source languages: st\n* target languages: fi\n* OPUS readme: st-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.8, chr-F: 0.520"
] |
translation | transformers |
### opus-mt-st-fr
* source languages: st
* target languages: fr
* OPUS readme: [st-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/st-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/st-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.st.fr | 30.7 | 0.490 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-st-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"st",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #st #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-st-fr
* source languages: st
* target languages: fr
* OPUS readme: st-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.7, chr-F: 0.490
| [
"### opus-mt-st-fr\n\n\n* source languages: st\n* target languages: fr\n* OPUS readme: st-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.7, chr-F: 0.490"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #st #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-st-fr\n\n\n* source languages: st\n* target languages: fr\n* OPUS readme: st-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.7, chr-F: 0.490"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #st #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-st-fr\n\n\n* source languages: st\n* target languages: fr\n* OPUS readme: st-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.7, chr-F: 0.490"
] |
translation | transformers |
### opus-mt-st-sv
* source languages: st
* target languages: sv
* OPUS readme: [st-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/st-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/st-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.st.sv | 33.5 | 0.523 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-st-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"st",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #st #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-st-sv
* source languages: st
* target languages: sv
* OPUS readme: st-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.5, chr-F: 0.523
| [
"### opus-mt-st-sv\n\n\n* source languages: st\n* target languages: sv\n* OPUS readme: st-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.5, chr-F: 0.523"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #st #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-st-sv\n\n\n* source languages: st\n* target languages: sv\n* OPUS readme: st-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.5, chr-F: 0.523"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #st #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-st-sv\n\n\n* source languages: st\n* target languages: sv\n* OPUS readme: st-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.5, chr-F: 0.523"
] |
translation | transformers |
### opus-mt-sv-NORWAY
* source languages: sv
* target languages: nb_NO,nb,nn_NO,nn,nog,no_nb,no
* OPUS readme: [sv-nb_NO+nb+nn_NO+nn+nog+no_nb+no](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-nb_NO+nb+nn_NO+nn+nog+no_nb+no/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.no | 39.3 | 0.590 |
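Because this is a multi-target model, the target variant has to be selected with the sentence-initial `>>id<<` token described above. The sketch below is illustrative only and not part of the original card: it assumes that `>>nb<<` (one of the target codes listed above) is accepted by the tokenizer, and the input sentence is invented.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-sv-NORWAY"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The target language is chosen by prepending a >>id<< token to the source text.
src_texts = [">>nb<< Hur mår du?"]  # Swedish input, Bokmål requested (assumed valid id)
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```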
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sv-NORWAY | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sv",
"no",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sv #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sv-NORWAY
* source languages: sv
* target languages: nb\_NO,nb,nn\_NO,nn,nog,no\_nb,no
* OPUS readme: sv-nb\_NO+nb+nn\_NO+nn+nog+no\_nb+no
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 39.3, chr-F: 0.590
| [
"### opus-mt-sv-NORWAY\n\n\n* source languages: sv\n* target languages: nb\\_NO,nb,nn\\_NO,nn,nog,no\\_nb,no\n* OPUS readme: sv-nb\\_NO+nb+nn\\_NO+nn+nog+no\\_nb+no\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.3, chr-F: 0.590"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sv #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sv-NORWAY\n\n\n* source languages: sv\n* target languages: nb\\_NO,nb,nn\\_NO,nn,nog,no\\_nb,no\n* OPUS readme: sv-nb\\_NO+nb+nn\\_NO+nn+nog+no\\_nb+no\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.3, chr-F: 0.590"
] | [
51,
186
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sv #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sv-NORWAY\n\n\n* source languages: sv\n* target languages: nb\\_NO,nb,nn\\_NO,nn,nog,no\\_nb,no\n* OPUS readme: sv-nb\\_NO+nb+nn\\_NO+nn+nog+no\\_nb+no\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.3, chr-F: 0.590"
] |
translation | transformers |
### opus-mt-sv-ZH
* source languages: sv
* target languages: cmn,cn,yue,ze_zh,zh_cn,zh_CN,zh_HK,zh_tw,zh_TW,zh_yue,zhs,zht,zh
* OPUS readme: [sv-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| bible-uedin.sv.zh | 24.2 | 0.342 |
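As with the other multi-target models, the target variant is selected with the sentence-initial `>>id<<` token. A short, hedged sketch using the high-level `pipeline` API (the `>>cmn<<` token is assumed to be one of the valid target IDs listed above, and the input sentence is invented):

```python
from transformers import pipeline

# Assumes ">>cmn<<" is accepted as a target ID; adjust to another listed code if needed.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-sv-ZH")
print(translator(">>cmn<< Hur mår du?"))  # illustrative input only
```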
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sv-ZH | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sv",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sv #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sv-ZH
* source languages: sv
* target languages: cmn,cn,yue,ze\_zh,zh\_cn,zh\_CN,zh\_HK,zh\_tw,zh\_TW,zh\_yue,zhs,zht,zh
* OPUS readme: sv-cmn+cn+yue+ze\_zh+zh\_cn+zh\_CN+zh\_HK+zh\_tw+zh\_TW+zh\_yue+zhs+zht+zh
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.2, chr-F: 0.342
| [
"### opus-mt-sv-ZH\n\n\n* source languages: sv\n* target languages: cmn,cn,yue,ze\\_zh,zh\\_cn,zh\\_CN,zh\\_HK,zh\\_tw,zh\\_TW,zh\\_yue,zhs,zht,zh\n* OPUS readme: sv-cmn+cn+yue+ze\\_zh+zh\\_cn+zh\\_CN+zh\\_HK+zh\\_tw+zh\\_TW+zh\\_yue+zhs+zht+zh\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.2, chr-F: 0.342"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sv #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sv-ZH\n\n\n* source languages: sv\n* target languages: cmn,cn,yue,ze\\_zh,zh\\_cn,zh\\_CN,zh\\_HK,zh\\_tw,zh\\_TW,zh\\_yue,zhs,zht,zh\n* OPUS readme: sv-cmn+cn+yue+ze\\_zh+zh\\_cn+zh\\_CN+zh\\_HK+zh\\_tw+zh\\_TW+zh\\_yue+zhs+zht+zh\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.2, chr-F: 0.342"
] | [
52,
250
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sv #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sv-ZH\n\n\n* source languages: sv\n* target languages: cmn,cn,yue,ze\\_zh,zh\\_cn,zh\\_CN,zh\\_HK,zh\\_tw,zh\\_TW,zh\\_yue,zhs,zht,zh\n* OPUS readme: sv-cmn+cn+yue+ze\\_zh+zh\\_cn+zh\\_CN+zh\\_HK+zh\\_tw+zh\\_TW+zh\\_yue+zhs+zht+zh\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.2, chr-F: 0.342"
] |
translation | transformers |
### opus-mt-sv-af
* source languages: sv
* target languages: af
* OPUS readme: [sv-af](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-af/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-af/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-af/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-af/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.af | 44.4 | 0.623 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sv-af | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sv",
"af",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #sv #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sv-af
* source languages: sv
* target languages: af
* OPUS readme: sv-af
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 44.4, chr-F: 0.623
| [
"### opus-mt-sv-af\n\n\n* source languages: sv\n* target languages: af\n* OPUS readme: sv-af\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.4, chr-F: 0.623"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sv #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sv-af\n\n\n* source languages: sv\n* target languages: af\n* OPUS readme: sv-af\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.4, chr-F: 0.623"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sv #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sv-af\n\n\n* source languages: sv\n* target languages: af\n* OPUS readme: sv-af\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.4, chr-F: 0.623"
] |
translation | transformers |
### opus-mt-sv-ase
* source languages: sv
* target languages: ase
* OPUS readme: [sv-ase](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ase/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ase/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ase/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ase/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.ase | 40.5 | 0.572 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-sv-ase | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"sv",
"ase",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #marian #text2text-generation #translation #sv #ase #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-sv-ase
* source languages: sv
* target languages: ase
* OPUS readme: sv-ase
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 40.5, chr-F: 0.572
| [
"### opus-mt-sv-ase\n\n\n* source languages: sv\n* target languages: ase\n* OPUS readme: sv-ase\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.5, chr-F: 0.572"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #sv #ase #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-sv-ase\n\n\n* source languages: sv\n* target languages: ase\n* OPUS readme: sv-ase\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.5, chr-F: 0.572"
] | [
49,
109
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #sv #ase #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-sv-ase\n\n\n* source languages: sv\n* target languages: ase\n* OPUS readme: sv-ase\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.5, chr-F: 0.572"
] |