| column | dtype | classes / length range |
|------------------|-----------------|-------------------------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | - |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | - |
| created_at | stringlengths | 25 to 25 |
| arxiv | sequencelengths | 0 to 201 |
| languages | sequencelengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | sequencelengths | 0 to 722 |
| processed_texts | sequencelengths | 1 to 723 |
| tokens_length | sequencelengths | 1 to 723 |
| input_texts | sequencelengths | 1 to 1 |
pipeline_tag: translation | library_name: transformers
### opus-mt-ee-fi
* source languages: ee
* target languages: fi
* OPUS readme: [ee-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-fi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-fi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-fi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.fi | 25.0 | 0.482 |
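The card above describes a Marian translation checkpoint (`ee` is the ISO 639-1 code for Ewe). A minimal usage sketch with the Hugging Face Transformers library; the `translate` helper is our own name, and running it requires `transformers`, `torch`, and `sentencepiece` to be installed:

```python
# Minimal inference sketch for Helsinki-NLP/opus-mt-ee-fi (Ewe -> Finnish).
MODEL_NAME = "Helsinki-NLP/opus-mt-ee-fi"

def translate(sentences, model_name=MODEL_NAME):
    """Translate a list of source sentences with a MarianMT checkpoint."""
    # Lazy import: loading transformers/torch is heavy and needs the model download.
    from transformers import MarianMTModel, MarianTokenizer

    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer(sentences, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return [tokenizer.decode(t, skip_special_tokens=True) for t in generated]

# usage (downloads the checkpoint on first call):
#   translate(["<Ewe sentence here>"])
```

The same pattern applies to every opus-mt checkpoint in this dump; only the model id changes.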
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ee-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ee",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
pipeline_tag: translation | library_name: transformers
### opus-mt-ee-fr
* source languages: ee
* target languages: fr
* OPUS readme: [ee-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.fr | 27.1 | 0.450 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ee-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ee",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
pipeline_tag: translation | library_name: transformers
### opus-mt-ee-sv
* source languages: ee
* target languages: sv
* OPUS readme: [ee-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ee-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ee-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ee-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ee.sv | 28.9 | 0.472 |
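The chr-F column in these benchmark tables is the character n-gram F-score (chrF). A simplified, self-contained sketch of the metric for a single sentence pair, using character n-grams up to order 6 and beta = 2 as in the common setup; it deliberately omits details of the official sacreBLEU implementation (whitespace handling, word-order component, corpus-level averaging):

```python
from collections import Counter

def chrf(hypothesis: str, reference: str, max_order: int = 6, beta: float = 2.0) -> float:
    """Simplified sentence-level chrF: mean n-gram precision/recall, combined as F-beta."""
    precisions, recalls = [], []
    for n in range(1, max_order + 1):
        hyp_ngrams = Counter(hypothesis[i:i + n] for i in range(len(hypothesis) - n + 1))
        ref_ngrams = Counter(reference[i:i + n] for i in range(len(reference) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped n-gram matches
        if hyp_ngrams:
            precisions.append(overlap / sum(hyp_ngrams.values()))
        if ref_ngrams:
            recalls.append(overlap / sum(ref_ngrams.values()))
    if not precisions or not recalls:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p == 0.0 and r == 0.0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)
```

An identical hypothesis and reference score 1.0; completely disjoint strings score 0.0. The scores reported in these cards were produced by the OPUS-MT evaluation pipeline, not by this sketch.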
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ee-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ee",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
pipeline_tag: translation | library_name: transformers
### opus-mt-efi-de
* source languages: efi
* target languages: de
* OPUS readme: [efi-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.de | 21.0 | 0.401 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-efi-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"efi",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
pipeline_tag: translation | library_name: transformers
### opus-mt-efi-en
* source languages: efi
* target languages: en
* OPUS readme: [efi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.en | 35.4 | 0.510 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-efi-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"efi",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
pipeline_tag: translation | library_name: transformers
### opus-mt-efi-fi
* source languages: efi
* target languages: fi
* OPUS readme: [efi-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.fi | 23.6 | 0.450 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-efi-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"efi",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
pipeline_tag: translation | library_name: transformers
### opus-mt-efi-fr
* source languages: efi
* target languages: fr
* OPUS readme: [efi-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-fr/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-fr/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-fr/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.fr | 25.1 | 0.419 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-efi-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"efi",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
pipeline_tag: translation | library_name: transformers
### opus-mt-efi-sv
* source languages: efi
* target languages: sv
* OPUS readme: [efi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.sv | 26.8 | 0.447 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-efi-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"efi",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
pipeline_tag: translation | library_name: transformers
### ell-ara
* source group: Modern Greek (1453-)
* target group: Arabic
* OPUS readme: [ell-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-ara/README.md)
* model: transformer
* source language(s): ell
* target language(s): ara arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required, in the form `>>id<<` (where id is a valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ell.ara | 21.9 | 0.485 |
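Because this checkpoint has multiple target variants (`ara`, `arz`), each input sentence must start with the `>>id<<` token described above. A small illustrative helper (the function name and validation are our own, not part of the OPUS-MT tooling):

```python
# Target language id(s) listed in the card above for ell-ara.
VALID_TARGETS = {"ara", "arz"}

def add_target_token(text: str, lang_id: str) -> str:
    """Prefix a source sentence with the sentence-initial >>id<< target token."""
    if lang_id not in VALID_TARGETS:
        raise ValueError(f"unknown target language id: {lang_id!r}")
    return f">>{lang_id}<< {text}"

# e.g. add_target_token("Καλημέρα.", "ara") -> ">>ara<< Καλημέρα."
```

The tokenized result is then passed to the model exactly like any other source sentence.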
### System Info:
- hf_name: ell-ara
- source_languages: ell
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['el', 'ar']
- src_constituents: {'ell'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-ara/opus-2020-07-03.test.txt
- src_alpha3: ell
- tgt_alpha3: ara
- short_pair: el-ar
- chrF2_score: 0.485
- bleu: 21.9
- brevity_penalty: 0.972
- ref_len: 1686.0
- src_name: Modern Greek (1453-)
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: el
- tgt_alpha2: ar
- prefer_old: False
- long_pair: ell-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41

* id: Helsinki-NLP/opus-mt-el-ar
* metadata: {"language": ["el", "ar"], "license": "apache-2.0", "tags": ["translation"]}
* tags: transformers, pytorch, tf, marian, text2text-generation, translation, el, ar, license:apache-2.0, autotrain_compatible, endpoints_compatible, has_space, region:us
* languages: el, ar
* created_at: 2022-03-02T23:29:04+00:00
translation | transformers |
### ell-epo
* source group: Modern Greek (1453-)
* target group: Esperanto
* OPUS readme: [ell-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-epo/README.md)
* model: transformer-align
* source language(s): ell
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ell.epo | 32.4 | 0.517 |
### System Info:
- hf_name: ell-epo
- source_languages: ell
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ell-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['el', 'eo']
- src_constituents: {'ell'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ell-epo/opus-2020-06-16.test.txt
- src_alpha3: ell
- tgt_alpha3: epo
- short_pair: el-eo
- chrF2_score: 0.517
- bleu: 32.4
- brevity_penalty: 0.9790000000000001
- ref_len: 3807.0
- src_name: Modern Greek (1453-)
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: el
- tgt_alpha2: eo
- prefer_old: False
- long_pair: ell-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["el", "eo"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-el-eo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"el",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"el",
"eo"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #el #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ell-epo
* source group: Modern Greek (1453-)
* target group: Esperanto
* OPUS readme: ell-epo
* model: transformer-align
* source language(s): ell
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 32.4, chr-F: 0.517
### System Info:
* hf\_name: ell-epo
* source\_languages: ell
* target\_languages: epo
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['el', 'eo']
* src\_constituents: {'ell'}
* tgt\_constituents: {'epo'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ell
* tgt\_alpha3: epo
* short\_pair: el-eo
* chrF2\_score: 0.517
* bleu: 32.4
* brevity\_penalty: 0.9790000000000001
* ref\_len: 3807.0
* src\_name: Modern Greek (1453-)
* tgt\_name: Esperanto
* train\_date: 2020-06-16
* src\_alpha2: el
* tgt\_alpha2: eo
* prefer\_old: False
* long\_pair: ell-epo
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ell-epo\n\n\n* source group: Modern Greek (1453-)\n* target group: Esperanto\n* OPUS readme: ell-epo\n* model: transformer-align\n* source language(s): ell\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.4, chr-F: 0.517",
"### System Info:\n\n\n* hf\\_name: ell-epo\n* source\\_languages: ell\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['el', 'eo']\n* src\\_constituents: {'ell'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ell\n* tgt\\_alpha3: epo\n* short\\_pair: el-eo\n* chrF2\\_score: 0.517\n* bleu: 32.4\n* brevity\\_penalty: 0.9790000000000001\n* ref\\_len: 3807.0\n* src\\_name: Modern Greek (1453-)\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: el\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: ell-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #el #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ell-epo\n\n\n* source group: Modern Greek (1453-)\n* target group: Esperanto\n* OPUS readme: ell-epo\n* model: transformer-align\n* source language(s): ell\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.4, chr-F: 0.517",
"### System Info:\n\n\n* hf\\_name: ell-epo\n* source\\_languages: ell\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['el', 'eo']\n* src\\_constituents: {'ell'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ell\n* tgt\\_alpha3: epo\n* short\\_pair: el-eo\n* chrF2\\_score: 0.517\n* bleu: 32.4\n* brevity\\_penalty: 0.9790000000000001\n* ref\\_len: 3807.0\n* src\\_name: Modern Greek (1453-)\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: el\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: ell-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
145,
418
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #el #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ell-epo\n\n\n* source group: Modern Greek (1453-)\n* target group: Esperanto\n* OPUS readme: ell-epo\n* model: transformer-align\n* source language(s): ell\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.4, chr-F: 0.517### System Info:\n\n\n* hf\\_name: ell-epo\n* source\\_languages: ell\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['el', 'eo']\n* src\\_constituents: {'ell'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ell\n* tgt\\_alpha3: epo\n* short\\_pair: el-eo\n* chrF2\\_score: 0.517\n* bleu: 32.4\n* brevity\\_penalty: 0.9790000000000001\n* ref\\_len: 3807.0\n* src\\_name: Modern Greek (1453-)\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: el\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: ell-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-el-fi
* source languages: el
* target languages: fi
* OPUS readme: [el-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/el-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/el-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.el.fi | 25.3 | 0.517 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-el-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"el",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #el #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-el-fi
* source languages: el
* target languages: fi
* OPUS readme: el-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.3, chr-F: 0.517
| [
"### opus-mt-el-fi\n\n\n* source languages: el\n* target languages: fi\n* OPUS readme: el-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.517"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #el #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-el-fi\n\n\n* source languages: el\n* target languages: fi\n* OPUS readme: el-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.517"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #el #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-el-fi\n\n\n* source languages: el\n* target languages: fi\n* OPUS readme: el-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.517"
] |
translation | transformers |
### opus-mt-el-fr
* source languages: el
* target languages: fr
* OPUS readme: [el-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/el-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/el-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.el.fr | 63.0 | 0.741 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-el-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"el",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #el #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-el-fr
* source languages: el
* target languages: fr
* OPUS readme: el-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 63.0, chr-F: 0.741
| [
"### opus-mt-el-fr\n\n\n* source languages: el\n* target languages: fr\n* OPUS readme: el-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 63.0, chr-F: 0.741"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #el #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-el-fr\n\n\n* source languages: el\n* target languages: fr\n* OPUS readme: el-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 63.0, chr-F: 0.741"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #el #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-el-fr\n\n\n* source languages: el\n* target languages: fr\n* OPUS readme: el-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 63.0, chr-F: 0.741"
] |
translation | transformers |
### opus-mt-el-sv
* source languages: el
* target languages: sv
* OPUS readme: [el-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/el-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/el-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/el-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.el.sv | 23.6 | 0.498 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-el-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"el",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #el #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-el-sv
* source languages: el
* target languages: sv
* OPUS readme: el-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.6, chr-F: 0.498
| [
"### opus-mt-el-sv\n\n\n* source languages: el\n* target languages: sv\n* OPUS readme: el-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.6, chr-F: 0.498"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #el #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-el-sv\n\n\n* source languages: el\n* target languages: sv\n* OPUS readme: el-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.6, chr-F: 0.498"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #el #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-el-sv\n\n\n* source languages: el\n* target languages: sv\n* OPUS readme: el-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.6, chr-F: 0.498"
] |
translation | transformers |
### opus-mt-en-INSULAR_CELTIC
* source languages: en
* target languages: ga,cy,br,gd,kw,gv
* OPUS readme: [en-ga+cy+br+gd+kw+gv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ga+cy+br+gd+kw+gv/README.md)
* dataset: opus+techiaith+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus+techiaith+bt-2020-04-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ga+cy+br+gd+kw+gv/opus+techiaith+bt-2020-04-24.zip)
* test set translations: [opus+techiaith+bt-2020-04-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ga+cy+br+gd+kw+gv/opus+techiaith+bt-2020-04-24.test.txt)
* test set scores: [opus+techiaith+bt-2020-04-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ga+cy+br+gd+kw+gv/opus+techiaith+bt-2020-04-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ga | 22.8 | 0.404 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-CELTIC | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"cel",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #cel #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-INSULAR\_CELTIC
* source languages: en
* target languages: ga,cy,br,gd,kw,gv
* OPUS readme: en-ga+cy+br+gd+kw+gv
* dataset: opus+techiaith+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: opus+techiaith+URL
* test set translations: opus+techiaith+URL
* test set scores: opus+techiaith+URL
Benchmarks
----------
testset: URL, BLEU: 22.8, chr-F: 0.404
| [
"### opus-mt-en-INSULAR\\_CELTIC\n\n\n* source languages: en\n* target languages: ga,cy,br,gd,kw,gv\n* OPUS readme: en-ga+cy+br+gd+kw+gv\n* dataset: opus+techiaith+bt\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: opus+techiaith+URL\n* test set translations: opus+techiaith+URL\n* test set scores: opus+techiaith+URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.404"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #cel #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-INSULAR\\_CELTIC\n\n\n* source languages: en\n* target languages: ga,cy,br,gd,kw,gv\n* OPUS readme: en-ga+cy+br+gd+kw+gv\n* dataset: opus+techiaith+bt\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: opus+techiaith+URL\n* test set translations: opus+techiaith+URL\n* test set scores: opus+techiaith+URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.404"
] | [
52,
184
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #cel #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-INSULAR\\_CELTIC\n\n\n* source languages: en\n* target languages: ga,cy,br,gd,kw,gv\n* OPUS readme: en-ga+cy+br+gd+kw+gv\n* dataset: opus+techiaith+bt\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: opus+techiaith+URL\n* test set translations: opus+techiaith+URL\n* test set scores: opus+techiaith+URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.404"
] |
translation | transformers |
### opus-mt-en-ROMANCE
* source languages: en
* target languages: fr,fr_BE,fr_CA,fr_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es_AR,es_CL,es_CO,es_CR,es_DO,es_EC,es_ES,es_GT,es_HN,es_MX,es_NI,es_PA,es_PE,es_PR,es_SV,es_UY,es_VE,pt,pt_br,pt_BR,pt_PT,gl,lad,an,mwl,it,it_IT,co,nap,scn,vec,sc,ro,la
* OPUS readme: [en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/README.md)
* dataset: opus
* model: transformer
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-04-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/opus-2020-04-21.zip)
* test set translations: [opus-2020-04-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/opus-2020-04-21.test.txt)
* test set scores: [opus-2020-04-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la/opus-2020-04-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.la | 50.1 | 0.693 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ROMANCE | null | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"roa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #jax #rust #marian #text2text-generation #translation #en #roa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ROMANCE
* source languages: en
* target languages: fr,fr\_BE,fr\_CA,fr\_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es\_AR,es\_CL,es\_CO,es\_CR,es\_DO,es\_EC,es\_ES,es\_GT,es\_HN,es\_MX,es\_NI,es\_PA,es\_PE,es\_PR,es\_SV,es\_UY,es\_VE,pt,pt\_br,pt\_BR,pt\_PT,gl,lad,an,mwl,it,it\_IT,co,nap,scn,vec,sc,ro,la
* OPUS readme: en-fr+fr\_BE+fr\_CA+fr\_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es\_AR+es\_CL+es\_CO+es\_CR+es\_DO+es\_EC+es\_ES+es\_GT+es\_HN+es\_MX+es\_NI+es\_PA+es\_PE+es\_PR+es\_SV+es\_UY+es\_VE+pt+pt\_br+pt\_BR+pt\_PT+gl+lad+an+mwl+it+it\_IT+co+nap+scn+vec+sc+ro+la
* dataset: opus
* model: transformer
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 50.1, chr-F: 0.693
| [
"### opus-mt-en-ROMANCE\n\n\n* source languages: en\n* target languages: fr,fr\\_BE,fr\\_CA,fr\\_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es\\_AR,es\\_CL,es\\_CO,es\\_CR,es\\_DO,es\\_EC,es\\_ES,es\\_GT,es\\_HN,es\\_MX,es\\_NI,es\\_PA,es\\_PE,es\\_PR,es\\_SV,es\\_UY,es\\_VE,pt,pt\\_br,pt\\_BR,pt\\_PT,gl,lad,an,mwl,it,it\\_IT,co,nap,scn,vec,sc,ro,la\n* OPUS readme: en-fr+fr\\_BE+fr\\_CA+fr\\_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es\\_AR+es\\_CL+es\\_CO+es\\_CR+es\\_DO+es\\_EC+es\\_ES+es\\_GT+es\\_HN+es\\_MX+es\\_NI+es\\_PA+es\\_PE+es\\_PR+es\\_SV+es\\_UY+es\\_VE+pt+pt\\_br+pt\\_BR+pt\\_PT+gl+lad+an+mwl+it+it\\_IT+co+nap+scn+vec+sc+ro+la\n* dataset: opus\n* model: transformer\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.1, chr-F: 0.693"
] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #marian #text2text-generation #translation #en #roa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ROMANCE\n\n\n* source languages: en\n* target languages: fr,fr\\_BE,fr\\_CA,fr\\_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es\\_AR,es\\_CL,es\\_CO,es\\_CR,es\\_DO,es\\_EC,es\\_ES,es\\_GT,es\\_HN,es\\_MX,es\\_NI,es\\_PA,es\\_PE,es\\_PR,es\\_SV,es\\_UY,es\\_VE,pt,pt\\_br,pt\\_BR,pt\\_PT,gl,lad,an,mwl,it,it\\_IT,co,nap,scn,vec,sc,ro,la\n* OPUS readme: en-fr+fr\\_BE+fr\\_CA+fr\\_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es\\_AR+es\\_CL+es\\_CO+es\\_CR+es\\_DO+es\\_EC+es\\_ES+es\\_GT+es\\_HN+es\\_MX+es\\_NI+es\\_PA+es\\_PE+es\\_PR+es\\_SV+es\\_UY+es\\_VE+pt+pt\\_br+pt\\_BR+pt\\_PT+gl+lad+an+mwl+it+it\\_IT+co+nap+scn+vec+sc+ro+la\n* dataset: opus\n* model: transformer\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.1, chr-F: 0.693"
] | [
56,
485
] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #marian #text2text-generation #translation #en #roa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ROMANCE\n\n\n* source languages: en\n* target languages: fr,fr\\_BE,fr\\_CA,fr\\_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es\\_AR,es\\_CL,es\\_CO,es\\_CR,es\\_DO,es\\_EC,es\\_ES,es\\_GT,es\\_HN,es\\_MX,es\\_NI,es\\_PA,es\\_PE,es\\_PR,es\\_SV,es\\_UY,es\\_VE,pt,pt\\_br,pt\\_BR,pt\\_PT,gl,lad,an,mwl,it,it\\_IT,co,nap,scn,vec,sc,ro,la\n* OPUS readme: en-fr+fr\\_BE+fr\\_CA+fr\\_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es\\_AR+es\\_CL+es\\_CO+es\\_CR+es\\_DO+es\\_EC+es\\_ES+es\\_GT+es\\_HN+es\\_MX+es\\_NI+es\\_PA+es\\_PE+es\\_PR+es\\_SV+es\\_UY+es\\_VE+pt+pt\\_br+pt\\_BR+pt\\_PT+gl+lad+an+mwl+it+it\\_IT+co+nap+scn+vec+sc+ro+la\n* dataset: opus\n* model: transformer\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.1, chr-F: 0.693"
] |
translation | transformers |
### eng-aav
* source group: English
* target group: Austro-Asiatic languages
* OPUS readme: [eng-aav](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-aav/README.md)
* model: transformer
* source language(s): eng
* target language(s): hoc hoc_Latn kha khm khm_Latn mnw vie vie_Hani
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aav/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aav/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aav/opus-2020-07-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-hoc.eng.hoc | 0.1 | 0.033 |
| Tatoeba-test.eng-kha.eng.kha | 0.4 | 0.043 |
| Tatoeba-test.eng-khm.eng.khm | 0.2 | 0.242 |
| Tatoeba-test.eng-mnw.eng.mnw | 0.8 | 0.003 |
| Tatoeba-test.eng.multi | 16.1 | 0.311 |
| Tatoeba-test.eng-vie.eng.vie | 33.2 | 0.508 |
### System Info:
- hf_name: eng-aav
- source_languages: eng
- target_languages: aav
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-aav/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'vi', 'km', 'aav']
- src_constituents: {'eng'}
- tgt_constituents: {'mnw', 'vie', 'kha', 'khm', 'vie_Hani', 'khm_Latn', 'hoc_Latn', 'hoc'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aav/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aav/opus-2020-07-26.test.txt
- src_alpha3: eng
- tgt_alpha3: aav
- short_pair: en-aav
- chrF2_score: 0.311
- bleu: 16.1
- brevity_penalty: 1.0
- ref_len: 38261.0
- src_name: English
- tgt_name: Austro-Asiatic languages
- train_date: 2020-07-26
- src_alpha2: en
- tgt_alpha2: aav
- prefer_old: False
- long_pair: eng-aav
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "vi", "km", "aav"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-aav | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"vi",
"km",
"aav",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"vi",
"km",
"aav"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #vi #km #aav #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-aav
* source group: English
* target group: Austro-Asiatic languages
* OPUS readme: eng-aav
* model: transformer
* source language(s): eng
* target language(s): hoc hoc\_Latn kha khm khm\_Latn mnw vie vie\_Hani
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 0.1, chr-F: 0.033
testset: URL, BLEU: 0.4, chr-F: 0.043
testset: URL, BLEU: 0.2, chr-F: 0.242
testset: URL, BLEU: 0.8, chr-F: 0.003
testset: URL, BLEU: 16.1, chr-F: 0.311
testset: URL, BLEU: 33.2, chr-F: 0.508
### System Info:
* hf\_name: eng-aav
* source\_languages: eng
* target\_languages: aav
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'vi', 'km', 'aav']
* src\_constituents: {'eng'}
* tgt\_constituents: {'mnw', 'vie', 'kha', 'khm', 'vie\_Hani', 'khm\_Latn', 'hoc\_Latn', 'hoc'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: aav
* short\_pair: en-aav
* chrF2\_score: 0.311
* bleu: 16.1
* brevity\_penalty: 1.0
* ref\_len: 38261.0
* src\_name: English
* tgt\_name: Austro-Asiatic languages
* train\_date: 2020-07-26
* src\_alpha2: en
* tgt\_alpha2: aav
* prefer\_old: False
* long\_pair: eng-aav
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-aav\n\n\n* source group: English\n* target group: Austro-Asiatic languages\n* OPUS readme: eng-aav\n* model: transformer\n* source language(s): eng\n* target language(s): hoc hoc\\_Latn kha khm khm\\_Latn mnw vie vie\\_Hani\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 0.1, chr-F: 0.033\ntestset: URL, BLEU: 0.4, chr-F: 0.043\ntestset: URL, BLEU: 0.2, chr-F: 0.242\ntestset: URL, BLEU: 0.8, chr-F: 0.003\ntestset: URL, BLEU: 16.1, chr-F: 0.311\ntestset: URL, BLEU: 33.2, chr-F: 0.508",
"### System Info:\n\n\n* hf\\_name: eng-aav\n* source\\_languages: eng\n* target\\_languages: aav\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'vi', 'km', 'aav']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'mnw', 'vie', 'kha', 'khm', 'vie\\_Hani', 'khm\\_Latn', 'hoc\\_Latn', 'hoc'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: aav\n* short\\_pair: en-aav\n* chrF2\\_score: 0.311\n* bleu: 16.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 38261.0\n* src\\_name: English\n* tgt\\_name: Austro-Asiatic languages\n* train\\_date: 2020-07-26\n* src\\_alpha2: en\n* tgt\\_alpha2: aav\n* prefer\\_old: False\n* long\\_pair: eng-aav\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #vi #km #aav #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-aav\n\n\n* source group: English\n* target group: Austro-Asiatic languages\n* OPUS readme: eng-aav\n* model: transformer\n* source language(s): eng\n* target language(s): hoc hoc\\_Latn kha khm khm\\_Latn mnw vie vie\\_Hani\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 0.1, chr-F: 0.033\ntestset: URL, BLEU: 0.4, chr-F: 0.043\ntestset: URL, BLEU: 0.2, chr-F: 0.242\ntestset: URL, BLEU: 0.8, chr-F: 0.003\ntestset: URL, BLEU: 16.1, chr-F: 0.311\ntestset: URL, BLEU: 33.2, chr-F: 0.508",
"### System Info:\n\n\n* hf\\_name: eng-aav\n* source\\_languages: eng\n* target\\_languages: aav\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'vi', 'km', 'aav']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'mnw', 'vie', 'kha', 'khm', 'vie\\_Hani', 'khm\\_Latn', 'hoc\\_Latn', 'hoc'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: aav\n* short\\_pair: en-aav\n* chrF2\\_score: 0.311\n* bleu: 16.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 38261.0\n* src\\_name: English\n* tgt\\_name: Austro-Asiatic languages\n* train\\_date: 2020-07-26\n* src\\_alpha2: en\n* tgt\\_alpha2: aav\n* prefer\\_old: False\n* long\\_pair: eng-aav\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
56,
297,
454
] | [
    "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #vi #km #aav #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-aav\n\n\n* source group: English\n* target group: Austro-Asiatic languages\n* OPUS readme: eng-aav\n* model: transformer\n* source language(s): eng\n* target language(s): hoc hoc\\_Latn kha khm khm\\_Latn mnw vie vie\\_Hani\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 0.1, chr-F: 0.033\ntestset: URL, BLEU: 0.4, chr-F: 0.043\ntestset: URL, BLEU: 0.2, chr-F: 0.242\ntestset: URL, BLEU: 0.8, chr-F: 0.003\ntestset: URL, BLEU: 16.1, chr-F: 0.311\ntestset: URL, BLEU: 33.2, chr-F: 0.508### System Info:\n\n\n* hf\\_name: eng-aav\n* source\\_languages: eng\n* target\\_languages: aav\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'vi', 'km', 'aav']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'mnw', 'vie', 'kha', 'khm', 'vie\\_Hani', 'khm\\_Latn', 'hoc\\_Latn', 'hoc'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: aav\n* short\\_pair: en-aav\n* chrF2\\_score: 0.311\n* bleu: 16.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 38261.0\n* src\\_name: English\n* tgt\\_name: Austro-Asiatic languages\n* train\\_date: 2020-07-26\n* src\\_alpha2: en\n* tgt\\_alpha2: aav\n* prefer\\_old: False\n* long\\_pair: eng-aav\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
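The eng-aav card repeated above notes that multilingual targets require a sentence-initial language token of the form `>>id<<` (where `id` is a valid target language ID such as `vie` or `khm`). A minimal sketch of that preprocessing step is below; the `prepend_target_token` helper name is illustrative, not a library API.

```python
def prepend_target_token(sentences, target_lang):
    """Prefix each source sentence with the >>id<< token the card requires."""
    token = f">>{target_lang}<<"
    return [f"{token} {s}" for s in sentences]
```

With `transformers`, the tagged strings would then be passed to the Marian tokenizer as ordinary source text, and the token steers decoding toward the chosen target language.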
translation | transformers |
### opus-mt-en-af
* source languages: en
* target languages: af
* OPUS readme: [en-af](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-af/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-af/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-af/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-af/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.af | 56.1 | 0.741 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-af | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"af",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-af
* source languages: en
* target languages: af
* OPUS readme: en-af
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 56.1, chr-F: 0.741
| [
"### opus-mt-en-af\n\n\n* source languages: en\n* target languages: af\n* OPUS readme: en-af\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.1, chr-F: 0.741"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-af\n\n\n* source languages: en\n* target languages: af\n* OPUS readme: en-af\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.1, chr-F: 0.741"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-af\n\n\n* source languages: en\n* target languages: af\n* OPUS readme: en-af\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.1, chr-F: 0.741"
] |
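The benchmark tables in these cards report chr-F next to BLEU; chrF is a character n-gram F-score. A simplified sketch is shown below (character n-grams only, beta = 2, whitespace stripped); this is an approximation for illustration, not the exact sacreBLEU implementation used to produce the scores above.

```python
from collections import Counter

def char_ngrams(text, n):
    """Count character n-grams, ignoring whitespace as chrF typically does."""
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Average F_beta over character n-gram orders 1..max_n."""
    scores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue  # sentence shorter than n characters
        matched = sum((hyp & ref).values())
        prec = matched / sum(hyp.values())
        rec = matched / sum(ref.values())
        if prec + rec == 0:
            scores.append(0.0)
            continue
        scores.append((1 + beta**2) * prec * rec / (beta**2 * prec + rec))
    return sum(scores) / len(scores) if scores else 0.0
```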
translation | transformers |
### eng-afa
* source group: English
* target group: Afro-Asiatic languages
* OPUS readme: [eng-afa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-afa/README.md)
* model: transformer
* source language(s): eng
* target language(s): acm afb amh apc ara arq ary arz hau_Latn heb kab mlt rif_Latn shy_Latn som tir
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-amh.eng.amh | 11.6 | 0.504 |
| Tatoeba-test.eng-ara.eng.ara | 12.0 | 0.404 |
| Tatoeba-test.eng-hau.eng.hau | 10.2 | 0.429 |
| Tatoeba-test.eng-heb.eng.heb | 32.3 | 0.551 |
| Tatoeba-test.eng-kab.eng.kab | 1.6 | 0.191 |
| Tatoeba-test.eng-mlt.eng.mlt | 17.7 | 0.551 |
| Tatoeba-test.eng.multi | 14.4 | 0.375 |
| Tatoeba-test.eng-rif.eng.rif | 1.7 | 0.103 |
| Tatoeba-test.eng-shy.eng.shy | 0.8 | 0.090 |
| Tatoeba-test.eng-som.eng.som | 16.0 | 0.429 |
| Tatoeba-test.eng-tir.eng.tir | 2.7 | 0.238 |
### System Info:
- hf_name: eng-afa
- source_languages: eng
- target_languages: afa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-afa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'so', 'ti', 'am', 'he', 'mt', 'ar', 'afa']
- src_constituents: {'eng'}
- tgt_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: afa
- short_pair: en-afa
- chrF2_score: 0.375
- bleu: 14.4
- brevity_penalty: 1.0
- ref_len: 58110.0
- src_name: English
- tgt_name: Afro-Asiatic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: afa
- prefer_old: False
- long_pair: eng-afa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "so", "ti", "am", "he", "mt", "ar", "afa"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-afa | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"so",
"ti",
"am",
"he",
"mt",
"ar",
"afa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"so",
"ti",
"am",
"he",
"mt",
"ar",
"afa"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #so #ti #am #he #mt #ar #afa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-afa
* source group: English
* target group: Afro-Asiatic languages
* OPUS readme: eng-afa
* model: transformer
* source language(s): eng
* target language(s): acm afb amh apc ara arq ary arz hau\_Latn heb kab mlt rif\_Latn shy\_Latn som tir
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 11.6, chr-F: 0.504
testset: URL, BLEU: 12.0, chr-F: 0.404
testset: URL, BLEU: 10.2, chr-F: 0.429
testset: URL, BLEU: 32.3, chr-F: 0.551
testset: URL, BLEU: 1.6, chr-F: 0.191
testset: URL, BLEU: 17.7, chr-F: 0.551
testset: URL, BLEU: 14.4, chr-F: 0.375
testset: URL, BLEU: 1.7, chr-F: 0.103
testset: URL, BLEU: 0.8, chr-F: 0.090
testset: URL, BLEU: 16.0, chr-F: 0.429
testset: URL, BLEU: 2.7, chr-F: 0.238
### System Info:
* hf\_name: eng-afa
* source\_languages: eng
* target\_languages: afa
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'so', 'ti', 'am', 'he', 'mt', 'ar', 'afa']
* src\_constituents: {'eng'}
* tgt\_constituents: {'som', 'rif\_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy\_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau\_Latn', 'acm', 'ary'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: afa
* short\_pair: en-afa
* chrF2\_score: 0.375
* bleu: 14.4
* brevity\_penalty: 1.0
* ref\_len: 58110.0
* src\_name: English
* tgt\_name: Afro-Asiatic languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: afa
* prefer\_old: False
* long\_pair: eng-afa
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-afa\n\n\n* source group: English\n* target group: Afro-Asiatic languages\n* OPUS readme: eng-afa\n* model: transformer\n* source language(s): eng\n* target language(s): acm afb amh apc ara arq ary arz hau\\_Latn heb kab mlt rif\\_Latn shy\\_Latn som tir\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.6, chr-F: 0.504\ntestset: URL, BLEU: 12.0, chr-F: 0.404\ntestset: URL, BLEU: 10.2, chr-F: 0.429\ntestset: URL, BLEU: 32.3, chr-F: 0.551\ntestset: URL, BLEU: 1.6, chr-F: 0.191\ntestset: URL, BLEU: 17.7, chr-F: 0.551\ntestset: URL, BLEU: 14.4, chr-F: 0.375\ntestset: URL, BLEU: 1.7, chr-F: 0.103\ntestset: URL, BLEU: 0.8, chr-F: 0.090\ntestset: URL, BLEU: 16.0, chr-F: 0.429\ntestset: URL, BLEU: 2.7, chr-F: 0.238",
"### System Info:\n\n\n* hf\\_name: eng-afa\n* source\\_languages: eng\n* target\\_languages: afa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'so', 'ti', 'am', 'he', 'mt', 'ar', 'afa']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'som', 'rif\\_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy\\_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau\\_Latn', 'acm', 'ary'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: afa\n* short\\_pair: en-afa\n* chrF2\\_score: 0.375\n* bleu: 14.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 58110.0\n* src\\_name: English\n* tgt\\_name: Afro-Asiatic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: afa\n* prefer\\_old: False\n* long\\_pair: eng-afa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #so #ti #am #he #mt #ar #afa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-afa\n\n\n* source group: English\n* target group: Afro-Asiatic languages\n* OPUS readme: eng-afa\n* model: transformer\n* source language(s): eng\n* target language(s): acm afb amh apc ara arq ary arz hau\\_Latn heb kab mlt rif\\_Latn shy\\_Latn som tir\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.6, chr-F: 0.504\ntestset: URL, BLEU: 12.0, chr-F: 0.404\ntestset: URL, BLEU: 10.2, chr-F: 0.429\ntestset: URL, BLEU: 32.3, chr-F: 0.551\ntestset: URL, BLEU: 1.6, chr-F: 0.191\ntestset: URL, BLEU: 17.7, chr-F: 0.551\ntestset: URL, BLEU: 14.4, chr-F: 0.375\ntestset: URL, BLEU: 1.7, chr-F: 0.103\ntestset: URL, BLEU: 0.8, chr-F: 0.090\ntestset: URL, BLEU: 16.0, chr-F: 0.429\ntestset: URL, BLEU: 2.7, chr-F: 0.238",
"### System Info:\n\n\n* hf\\_name: eng-afa\n* source\\_languages: eng\n* target\\_languages: afa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'so', 'ti', 'am', 'he', 'mt', 'ar', 'afa']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'som', 'rif\\_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy\\_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau\\_Latn', 'acm', 'ary'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: afa\n* short\\_pair: en-afa\n* chrF2\\_score: 0.375\n* bleu: 14.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 58110.0\n* src\\_name: English\n* tgt\\_name: Afro-Asiatic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: afa\n* prefer\\_old: False\n* long\\_pair: eng-afa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
64,
427,
517
] | [
    "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #so #ti #am #he #mt #ar #afa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-afa\n\n\n* source group: English\n* target group: Afro-Asiatic languages\n* OPUS readme: eng-afa\n* model: transformer\n* source language(s): eng\n* target language(s): acm afb amh apc ara arq ary arz hau\\_Latn heb kab mlt rif\\_Latn shy\\_Latn som tir\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.6, chr-F: 0.504\ntestset: URL, BLEU: 12.0, chr-F: 0.404\ntestset: URL, BLEU: 10.2, chr-F: 0.429\ntestset: URL, BLEU: 32.3, chr-F: 0.551\ntestset: URL, BLEU: 1.6, chr-F: 0.191\ntestset: URL, BLEU: 17.7, chr-F: 0.551\ntestset: URL, BLEU: 14.4, chr-F: 0.375\ntestset: URL, BLEU: 1.7, chr-F: 0.103\ntestset: URL, BLEU: 0.8, chr-F: 0.090\ntestset: URL, BLEU: 16.0, chr-F: 0.429\ntestset: URL, BLEU: 2.7, chr-F: 0.238### System Info:\n\n\n* hf\\_name: eng-afa\n* source\\_languages: eng\n* target\\_languages: afa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'so', 'ti', 'am', 'he', 'mt', 'ar', 'afa']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'som', 'rif\\_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy\\_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau\\_Latn', 'acm', 'ary'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: afa\n* short\\_pair: en-afa\n* chrF2\\_score: 0.375\n* bleu: 14.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 58110.0\n* src\\_name: English\n* tgt\\_name: Afro-Asiatic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: afa\n* prefer\\_old: False\n* long\\_pair: eng-afa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
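The System Info blocks above pair each `bleu` score with a `brevity_penalty` and a `ref_len`. BLEU's brevity penalty is 1.0 when the hypothesis is at least as long as the reference and decays exponentially otherwise; a short sketch follows (the hypothesis length is an assumed input, since the cards only record the reference length).

```python
import math

def brevity_penalty(hyp_len, ref_len):
    """BLEU brevity penalty: 1.0 when hyp_len >= ref_len,
    exp(1 - ref_len / hyp_len) when the hypothesis is shorter."""
    if hyp_len == 0:
        return 0.0
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)
```

A reported `brevity_penalty: 1.0`, as in the cards above, means the system's translations were on average no shorter than the references, so BLEU was not penalized for length.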
translation | transformers |
### eng-alv
* source group: English
* target group: Atlantic-Congo languages
* OPUS readme: [eng-alv](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-alv/README.md)
* model: transformer
* source language(s): eng
* target language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-ewe.eng.ewe | 4.9 | 0.212 |
| Tatoeba-test.eng-ful.eng.ful | 0.6 | 0.079 |
| Tatoeba-test.eng-ibo.eng.ibo | 3.5 | 0.255 |
| Tatoeba-test.eng-kin.eng.kin | 10.5 | 0.510 |
| Tatoeba-test.eng-lin.eng.lin | 1.1 | 0.273 |
| Tatoeba-test.eng-lug.eng.lug | 5.3 | 0.340 |
| Tatoeba-test.eng.multi | 11.4 | 0.429 |
| Tatoeba-test.eng-nya.eng.nya | 18.1 | 0.595 |
| Tatoeba-test.eng-run.eng.run | 13.9 | 0.484 |
| Tatoeba-test.eng-sag.eng.sag | 5.3 | 0.194 |
| Tatoeba-test.eng-sna.eng.sna | 26.2 | 0.623 |
| Tatoeba-test.eng-swa.eng.swa | 1.0 | 0.141 |
| Tatoeba-test.eng-toi.eng.toi | 7.0 | 0.224 |
| Tatoeba-test.eng-tso.eng.tso | 46.7 | 0.643 |
| Tatoeba-test.eng-umb.eng.umb | 7.8 | 0.359 |
| Tatoeba-test.eng-wol.eng.wol | 6.8 | 0.191 |
| Tatoeba-test.eng-xho.eng.xho | 27.1 | 0.629 |
| Tatoeba-test.eng-yor.eng.yor | 17.4 | 0.356 |
| Tatoeba-test.eng-zul.eng.zul | 34.1 | 0.729 |
### System Info:
- hf_name: eng-alv
- source_languages: eng
- target_languages: alv
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-alv/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv']
- src_constituents: {'eng'}
- tgt_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: alv
- short_pair: en-alv
- chrF2_score: 0.429
- bleu: 11.4
- brevity_penalty: 1.0
- ref_len: 10603.0
- src_name: English
- tgt_name: Atlantic-Congo languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: alv
- prefer_old: False
- long_pair: eng-alv
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "sn", "rw", "wo", "ig", "sg", "ee", "zu", "lg", "ts", "ln", "ny", "yo", "rn", "xh", "alv"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-alv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sn",
"rw",
"wo",
"ig",
"sg",
"ee",
"zu",
"lg",
"ts",
"ln",
"ny",
"yo",
"rn",
"xh",
"alv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"sn",
"rw",
"wo",
"ig",
"sg",
"ee",
"zu",
"lg",
"ts",
"ln",
"ny",
"yo",
"rn",
"xh",
"alv"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #sn #rw #wo #ig #sg #ee #zu #lg #ts #ln #ny #yo #rn #xh #alv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-alv
* source group: English
* target group: Atlantic-Congo languages
* OPUS readme: eng-alv
* model: transformer
* source language(s): eng
* target language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi\_Latn tso umb wol xho yor zul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 4.9, chr-F: 0.212
testset: URL, BLEU: 0.6, chr-F: 0.079
testset: URL, BLEU: 3.5, chr-F: 0.255
testset: URL, BLEU: 10.5, chr-F: 0.510
testset: URL, BLEU: 1.1, chr-F: 0.273
testset: URL, BLEU: 5.3, chr-F: 0.340
testset: URL, BLEU: 11.4, chr-F: 0.429
testset: URL, BLEU: 18.1, chr-F: 0.595
testset: URL, BLEU: 13.9, chr-F: 0.484
testset: URL, BLEU: 5.3, chr-F: 0.194
testset: URL, BLEU: 26.2, chr-F: 0.623
testset: URL, BLEU: 1.0, chr-F: 0.141
testset: URL, BLEU: 7.0, chr-F: 0.224
testset: URL, BLEU: 46.7, chr-F: 0.643
testset: URL, BLEU: 7.8, chr-F: 0.359
testset: URL, BLEU: 6.8, chr-F: 0.191
testset: URL, BLEU: 27.1, chr-F: 0.629
testset: URL, BLEU: 17.4, chr-F: 0.356
testset: URL, BLEU: 34.1, chr-F: 0.729
### System Info:
* hf\_name: eng-alv
* source\_languages: eng
* target\_languages: alv
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv']
* src\_constituents: {'eng'}
* tgt\_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi\_Latn', 'umb'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: alv
* short\_pair: en-alv
* chrF2\_score: 0.429
* bleu: 11.4
* brevity\_penalty: 1.0
* ref\_len: 10603.0
* src\_name: English
* tgt\_name: Atlantic-Congo languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: alv
* prefer\_old: False
* long\_pair: eng-alv
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-alv\n\n\n* source group: English\n* target group: Atlantic-Congo languages\n* OPUS readme: eng-alv\n* model: transformer\n* source language(s): eng\n* target language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi\\_Latn tso umb wol xho yor zul\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 4.9, chr-F: 0.212\ntestset: URL, BLEU: 0.6, chr-F: 0.079\ntestset: URL, BLEU: 3.5, chr-F: 0.255\ntestset: URL, BLEU: 10.5, chr-F: 0.510\ntestset: URL, BLEU: 1.1, chr-F: 0.273\ntestset: URL, BLEU: 5.3, chr-F: 0.340\ntestset: URL, BLEU: 11.4, chr-F: 0.429\ntestset: URL, BLEU: 18.1, chr-F: 0.595\ntestset: URL, BLEU: 13.9, chr-F: 0.484\ntestset: URL, BLEU: 5.3, chr-F: 0.194\ntestset: URL, BLEU: 26.2, chr-F: 0.623\ntestset: URL, BLEU: 1.0, chr-F: 0.141\ntestset: URL, BLEU: 7.0, chr-F: 0.224\ntestset: URL, BLEU: 46.7, chr-F: 0.643\ntestset: URL, BLEU: 7.8, chr-F: 0.359\ntestset: URL, BLEU: 6.8, chr-F: 0.191\ntestset: URL, BLEU: 27.1, chr-F: 0.629\ntestset: URL, BLEU: 17.4, chr-F: 0.356\ntestset: URL, BLEU: 34.1, chr-F: 0.729",
"### System Info:\n\n\n* hf\\_name: eng-alv\n* source\\_languages: eng\n* target\\_languages: alv\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi\\_Latn', 'umb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: alv\n* short\\_pair: en-alv\n* chrF2\\_score: 0.429\n* bleu: 11.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 10603.0\n* src\\_name: English\n* tgt\\_name: Atlantic-Congo languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: alv\n* prefer\\_old: False\n* long\\_pair: eng-alv\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sn #rw #wo #ig #sg #ee #zu #lg #ts #ln #ny #yo #rn #xh #alv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-alv\n\n\n* source group: English\n* target group: Atlantic-Congo languages\n* OPUS readme: eng-alv\n* model: transformer\n* source language(s): eng\n* target language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi\\_Latn tso umb wol xho yor zul\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 4.9, chr-F: 0.212\ntestset: URL, BLEU: 0.6, chr-F: 0.079\ntestset: URL, BLEU: 3.5, chr-F: 0.255\ntestset: URL, BLEU: 10.5, chr-F: 0.510\ntestset: URL, BLEU: 1.1, chr-F: 0.273\ntestset: URL, BLEU: 5.3, chr-F: 0.340\ntestset: URL, BLEU: 11.4, chr-F: 0.429\ntestset: URL, BLEU: 18.1, chr-F: 0.595\ntestset: URL, BLEU: 13.9, chr-F: 0.484\ntestset: URL, BLEU: 5.3, chr-F: 0.194\ntestset: URL, BLEU: 26.2, chr-F: 0.623\ntestset: URL, BLEU: 1.0, chr-F: 0.141\ntestset: URL, BLEU: 7.0, chr-F: 0.224\ntestset: URL, BLEU: 46.7, chr-F: 0.643\ntestset: URL, BLEU: 7.8, chr-F: 0.359\ntestset: URL, BLEU: 6.8, chr-F: 0.191\ntestset: URL, BLEU: 27.1, chr-F: 0.629\ntestset: URL, BLEU: 17.4, chr-F: 0.356\ntestset: URL, BLEU: 34.1, chr-F: 0.729",
"### System Info:\n\n\n* hf\\_name: eng-alv\n* source\\_languages: eng\n* target\\_languages: alv\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi\\_Latn', 'umb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: alv\n* short\\_pair: en-alv\n* chrF2\\_score: 0.429\n* bleu: 11.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 10603.0\n* src\\_name: English\n* tgt\\_name: Atlantic-Congo languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: alv\n* prefer\\_old: False\n* long\\_pair: eng-alv\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
86,
602,
556
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sn #rw #wo #ig #sg #ee #zu #lg #ts #ln #ny #yo #rn #xh #alv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-alv\n\n\n* source group: English\n* target group: Atlantic-Congo languages\n* OPUS readme: eng-alv\n* model: transformer\n* source language(s): eng\n* target language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi\\_Latn tso umb wol xho yor zul\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 4.9, chr-F: 0.212\ntestset: URL, BLEU: 0.6, chr-F: 0.079\ntestset: URL, BLEU: 3.5, chr-F: 0.255\ntestset: URL, BLEU: 10.5, chr-F: 0.510\ntestset: URL, BLEU: 1.1, chr-F: 0.273\ntestset: URL, BLEU: 5.3, chr-F: 0.340\ntestset: URL, BLEU: 11.4, chr-F: 0.429\ntestset: URL, BLEU: 18.1, chr-F: 0.595\ntestset: URL, BLEU: 13.9, chr-F: 0.484\ntestset: URL, BLEU: 5.3, chr-F: 0.194\ntestset: URL, BLEU: 26.2, chr-F: 0.623\ntestset: URL, BLEU: 1.0, chr-F: 0.141\ntestset: URL, BLEU: 7.0, chr-F: 0.224\ntestset: URL, BLEU: 46.7, chr-F: 0.643\ntestset: URL, BLEU: 7.8, chr-F: 0.359\ntestset: URL, BLEU: 6.8, chr-F: 0.191\ntestset: URL, BLEU: 27.1, chr-F: 0.629\ntestset: URL, BLEU: 17.4, chr-F: 0.356\ntestset: URL, BLEU: 34.1, chr-F: 0.729### System Info:\n\n\n* hf\\_name: eng-alv\n* source\\_languages: eng\n* target\\_languages: alv\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 
'yor', 'run', 'xho', 'fuv', 'toi\\_Latn', 'umb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: alv\n* short\\_pair: en-alv\n* chrF2\\_score: 0.429\n* bleu: 11.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 10603.0\n* src\\_name: English\n* tgt\\_name: Atlantic-Congo languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: alv\n* prefer\\_old: False\n* long\\_pair: eng-alv\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### eng-ara
* source group: English
* target group: Arabic
* OPUS readme: [eng-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ara/README.md)
* model: transformer
* source language(s): eng
* target language(s): acm afb apc apc_Latn ara ara_Latn arq arq_Latn ary arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.ara | 14.0 | 0.437 |
### System Info:
- hf_name: eng-ara
- source_languages: eng
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ar']
- src_constituents: {'eng'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ara/opus-2020-07-03.test.txt
- src_alpha3: eng
- tgt_alpha3: ara
- short_pair: en-ar
- chrF2_score: 0.43700000000000006
- bleu: 14.0
- brevity_penalty: 1.0
- ref_len: 58935.0
- src_name: English
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: en
- tgt_alpha2: ar
- prefer_old: False
- long_pair: eng-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "ar"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ar | null | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"ar"
] | TAGS
#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-ara
* source group: English
* target group: Arabic
* OPUS readme: eng-ara
* model: transformer
* source language(s): eng
* target language(s): acm afb apc apc\_Latn ara ara\_Latn arq arq\_Latn ary arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 14.0, chr-F: 0.437
### System Info:
* hf\_name: eng-ara
* source\_languages: eng
* target\_languages: ara
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'ar']
* src\_constituents: {'eng'}
* tgt\_constituents: {'apc', 'ara', 'arq\_Latn', 'arq', 'afb', 'ara\_Latn', 'apc\_Latn', 'arz'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: ara
* short\_pair: en-ar
* chrF2\_score: 0.43700000000000006
* bleu: 14.0
* brevity\_penalty: 1.0
* ref\_len: 58935.0
* src\_name: English
* tgt\_name: Arabic
* train\_date: 2020-07-03
* src\_alpha2: en
* tgt\_alpha2: ar
* prefer\_old: False
* long\_pair: eng-ara
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-ara\n\n\n* source group: English\n* target group: Arabic\n* OPUS readme: eng-ara\n* model: transformer\n* source language(s): eng\n* target language(s): acm afb apc apc\\_Latn ara ara\\_Latn arq arq\\_Latn ary arz\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 14.0, chr-F: 0.437",
"### System Info:\n\n\n* hf\\_name: eng-ara\n* source\\_languages: eng\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ar']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: ara\n* short\\_pair: en-ar\n* chrF2\\_score: 0.43700000000000006\n* bleu: 14.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 58935.0\n* src\\_name: English\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: en\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: eng-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-ara\n\n\n* source group: English\n* target group: Arabic\n* OPUS readme: eng-ara\n* model: transformer\n* source language(s): eng\n* target language(s): acm afb apc apc\\_Latn ara ara\\_Latn arq arq\\_Latn ary arz\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 14.0, chr-F: 0.437",
"### System Info:\n\n\n* hf\\_name: eng-ara\n* source\\_languages: eng\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ar']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: ara\n* short\\_pair: en-ar\n* chrF2\\_score: 0.43700000000000006\n* bleu: 14.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 58935.0\n* src\\_name: English\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: en\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: eng-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
53,
185,
446
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-ara\n\n\n* source group: English\n* target group: Arabic\n* OPUS readme: eng-ara\n* model: transformer\n* source language(s): eng\n* target language(s): acm afb apc apc\\_Latn ara ara\\_Latn arq arq\\_Latn ary arz\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 14.0, chr-F: 0.437### System Info:\n\n\n* hf\\_name: eng-ara\n* source\\_languages: eng\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ar']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: ara\n* short\\_pair: en-ar\n* chrF2\\_score: 0.43700000000000006\n* bleu: 14.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 58935.0\n* src\\_name: English\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: en\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: eng-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### eng-aze
* source group: English
* target group: Azerbaijani
* OPUS readme: [eng-aze](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-aze/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): aze_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.aze | 18.6 | 0.477 |
### System Info:
- hf_name: eng-aze
- source_languages: eng
- target_languages: aze
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-aze/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'az']
- src_constituents: {'eng'}
- tgt_constituents: {'aze_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-aze/opus-2020-06-16.test.txt
- src_alpha3: eng
- tgt_alpha3: aze
- short_pair: en-az
- chrF2_score: 0.47700000000000004
- bleu: 18.6
- brevity_penalty: 1.0
- ref_len: 13012.0
- src_name: English
- tgt_name: Azerbaijani
- train_date: 2020-06-16
- src_alpha2: en
- tgt_alpha2: az
- prefer_old: False
- long_pair: eng-aze
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "az"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-az | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"az",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"az"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #az #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-aze
* source group: English
* target group: Azerbaijani
* OPUS readme: eng-aze
* model: transformer-align
* source language(s): eng
* target language(s): aze\_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 18.6, chr-F: 0.477
### System Info:
* hf\_name: eng-aze
* source\_languages: eng
* target\_languages: aze
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'az']
* src\_constituents: {'eng'}
* tgt\_constituents: {'aze\_Latn'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm12k,spm12k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: aze
* short\_pair: en-az
* chrF2\_score: 0.47700000000000004
* bleu: 18.6
* brevity\_penalty: 1.0
* ref\_len: 13012.0
* src\_name: English
* tgt\_name: Azerbaijani
* train\_date: 2020-06-16
* src\_alpha2: en
* tgt\_alpha2: az
* prefer\_old: False
* long\_pair: eng-aze
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-aze\n\n\n* source group: English\n* target group: Azerbaijani\n* OPUS readme: eng-aze\n* model: transformer-align\n* source language(s): eng\n* target language(s): aze\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.6, chr-F: 0.477",
"### System Info:\n\n\n* hf\\_name: eng-aze\n* source\\_languages: eng\n* target\\_languages: aze\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'az']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'aze\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: aze\n* short\\_pair: en-az\n* chrF2\\_score: 0.47700000000000004\n* bleu: 18.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 13012.0\n* src\\_name: English\n* tgt\\_name: Azerbaijani\n* train\\_date: 2020-06-16\n* src\\_alpha2: en\n* tgt\\_alpha2: az\n* prefer\\_old: False\n* long\\_pair: eng-aze\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #az #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-aze\n\n\n* source group: English\n* target group: Azerbaijani\n* OPUS readme: eng-aze\n* model: transformer-align\n* source language(s): eng\n* target language(s): aze\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.6, chr-F: 0.477",
"### System Info:\n\n\n* hf\\_name: eng-aze\n* source\\_languages: eng\n* target\\_languages: aze\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'az']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'aze\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: aze\n* short\\_pair: en-az\n* chrF2\\_score: 0.47700000000000004\n* bleu: 18.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 13012.0\n* src\\_name: English\n* tgt\\_name: Azerbaijani\n* train\\_date: 2020-06-16\n* src\\_alpha2: en\n* tgt\\_alpha2: az\n* prefer\\_old: False\n* long\\_pair: eng-aze\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
139,
407
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #az #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-aze\n\n\n* source group: English\n* target group: Azerbaijani\n* OPUS readme: eng-aze\n* model: transformer-align\n* source language(s): eng\n* target language(s): aze\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.6, chr-F: 0.477### System Info:\n\n\n* hf\\_name: eng-aze\n* source\\_languages: eng\n* target\\_languages: aze\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'az']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'aze\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: aze\n* short\\_pair: en-az\n* chrF2\\_score: 0.47700000000000004\n* bleu: 18.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 13012.0\n* src\\_name: English\n* tgt\\_name: Azerbaijani\n* train\\_date: 2020-06-16\n* src\\_alpha2: en\n* tgt\\_alpha2: az\n* prefer\\_old: False\n* long\\_pair: eng-aze\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### eng-bat
* source group: English
* target group: Baltic languages
* OPUS readme: [eng-bat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bat/README.md)
* model: transformer
* source language(s): eng
* target language(s): lav lit ltg prg_Latn sgs
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2017-enlv-englav.eng.lav | 24.0 | 0.546 |
| newsdev2019-enlt-englit.eng.lit | 20.9 | 0.533 |
| newstest2017-enlv-englav.eng.lav | 18.3 | 0.506 |
| newstest2019-enlt-englit.eng.lit | 13.6 | 0.466 |
| Tatoeba-test.eng-lav.eng.lav | 42.8 | 0.652 |
| Tatoeba-test.eng-lit.eng.lit | 37.1 | 0.650 |
| Tatoeba-test.eng.multi | 37.0 | 0.616 |
| Tatoeba-test.eng-prg.eng.prg | 0.5 | 0.130 |
| Tatoeba-test.eng-sgs.eng.sgs | 4.1 | 0.178 |
### System Info:
- hf_name: eng-bat
- source_languages: eng
- target_languages: bat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'lt', 'lv', 'bat']
- src_constituents: {'eng'}
- tgt_constituents: {'lit', 'lav', 'prg_Latn', 'ltg', 'sgs'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bat/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: bat
- short_pair: en-bat
- chrF2_score: 0.616
- bleu: 37.0
- brevity_penalty: 0.956
- ref_len: 26417.0
- src_name: English
- tgt_name: Baltic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: bat
- prefer_old: False
- long_pair: eng-bat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "lt", "lv", "bat"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-bat | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"lt",
"lv",
"bat",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"lt",
"lv",
"bat"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #lt #lv #bat #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-bat
* source group: English
* target group: Baltic languages
* OPUS readme: eng-bat
* model: transformer
* source language(s): eng
* target language(s): lav lit ltg prg\_Latn sgs
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.0, chr-F: 0.546
testset: URL, BLEU: 20.9, chr-F: 0.533
testset: URL, BLEU: 18.3, chr-F: 0.506
testset: URL, BLEU: 13.6, chr-F: 0.466
testset: URL, BLEU: 42.8, chr-F: 0.652
testset: URL, BLEU: 37.1, chr-F: 0.650
testset: URL, BLEU: 37.0, chr-F: 0.616
testset: URL, BLEU: 0.5, chr-F: 0.130
testset: URL, BLEU: 4.1, chr-F: 0.178
### System Info:
* hf\_name: eng-bat
* source\_languages: eng
* target\_languages: bat
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'lt', 'lv', 'bat']
* src\_constituents: {'eng'}
* tgt\_constituents: {'lit', 'lav', 'prg\_Latn', 'ltg', 'sgs'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: bat
* short\_pair: en-bat
* chrF2\_score: 0.616
* bleu: 37.0
* brevity\_penalty: 0.956
* ref\_len: 26417.0
* src\_name: English
* tgt\_name: Baltic languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: bat
* prefer\_old: False
* long\_pair: eng-bat
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-bat\n\n\n* source group: English\n* target group: Baltic languages\n* OPUS readme: eng-bat\n* model: transformer\n* source language(s): eng\n* target language(s): lav lit ltg prg\\_Latn sgs\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.0, chr-F: 0.546\ntestset: URL, BLEU: 20.9, chr-F: 0.533\ntestset: URL, BLEU: 18.3, chr-F: 0.506\ntestset: URL, BLEU: 13.6, chr-F: 0.466\ntestset: URL, BLEU: 42.8, chr-F: 0.652\ntestset: URL, BLEU: 37.1, chr-F: 0.650\ntestset: URL, BLEU: 37.0, chr-F: 0.616\ntestset: URL, BLEU: 0.5, chr-F: 0.130\ntestset: URL, BLEU: 4.1, chr-F: 0.178",
"### System Info:\n\n\n* hf\\_name: eng-bat\n* source\\_languages: eng\n* target\\_languages: bat\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'lt', 'lv', 'bat']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'lit', 'lav', 'prg\\_Latn', 'ltg', 'sgs'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: bat\n* short\\_pair: en-bat\n* chrF2\\_score: 0.616\n* bleu: 37.0\n* brevity\\_penalty: 0.956\n* ref\\_len: 26417.0\n* src\\_name: English\n* tgt\\_name: Baltic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: bat\n* prefer\\_old: False\n* long\\_pair: eng-bat\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #lt #lv #bat #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-bat\n\n\n* source group: English\n* target group: Baltic languages\n* OPUS readme: eng-bat\n* model: transformer\n* source language(s): eng\n* target language(s): lav lit ltg prg\\_Latn sgs\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.0, chr-F: 0.546\ntestset: URL, BLEU: 20.9, chr-F: 0.533\ntestset: URL, BLEU: 18.3, chr-F: 0.506\ntestset: URL, BLEU: 13.6, chr-F: 0.466\ntestset: URL, BLEU: 42.8, chr-F: 0.652\ntestset: URL, BLEU: 37.1, chr-F: 0.650\ntestset: URL, BLEU: 37.0, chr-F: 0.616\ntestset: URL, BLEU: 0.5, chr-F: 0.130\ntestset: URL, BLEU: 4.1, chr-F: 0.178",
"### System Info:\n\n\n* hf\\_name: eng-bat\n* source\\_languages: eng\n* target\\_languages: bat\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'lt', 'lv', 'bat']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'lit', 'lav', 'prg\\_Latn', 'ltg', 'sgs'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: bat\n* short\\_pair: en-bat\n* chrF2\\_score: 0.616\n* bleu: 37.0\n* brevity\\_penalty: 0.956\n* ref\\_len: 26417.0\n* src\\_name: English\n* tgt\\_name: Baltic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: bat\n* prefer\\_old: False\n* long\\_pair: eng-bat\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
56,
349,
426
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #lt #lv #bat #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-bat\n\n\n* source group: English\n* target group: Baltic languages\n* OPUS readme: eng-bat\n* model: transformer\n* source language(s): eng\n* target language(s): lav lit ltg prg\\_Latn sgs\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.0, chr-F: 0.546\ntestset: URL, BLEU: 20.9, chr-F: 0.533\ntestset: URL, BLEU: 18.3, chr-F: 0.506\ntestset: URL, BLEU: 13.6, chr-F: 0.466\ntestset: URL, BLEU: 42.8, chr-F: 0.652\ntestset: URL, BLEU: 37.1, chr-F: 0.650\ntestset: URL, BLEU: 37.0, chr-F: 0.616\ntestset: URL, BLEU: 0.5, chr-F: 0.130\ntestset: URL, BLEU: 4.1, chr-F: 0.178### System Info:\n\n\n* hf\\_name: eng-bat\n* source\\_languages: eng\n* target\\_languages: bat\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'lt', 'lv', 'bat']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'lit', 'lav', 'prg\\_Latn', 'ltg', 'sgs'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: bat\n* short\\_pair: en-bat\n* chrF2\\_score: 0.616\n* bleu: 37.0\n* brevity\\_penalty: 0.956\n* ref\\_len: 26417.0\n* src\\_name: English\n* tgt\\_name: Baltic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: bat\n* prefer\\_old: False\n* long\\_pair: eng-bat\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* 
port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
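The system-info rows above report `bleu`, `brevity_penalty`, and `ref_len` together. As an illustrative aside (not part of the dataset itself), the standard BLEU brevity penalty can be recomputed from the hypothesis and reference token counts:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: float) -> float:
    """BLEU brevity penalty: 1.0 when the hypothesis is at least as long
    as the reference, exp(1 - ref_len/hyp_len) when it is shorter."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# With ref_len = 26417.0 (as in the eng-bat row above), a slightly shorter
# hypothesis yields a penalty just below 1.0.
print(brevity_penalty(25000, 26417.0))
```

The hypothesis length of 25000 here is a made-up example value; only `ref_len` comes from the row.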
translation | transformers |
### opus-mt-en-bcl
* source languages: en
* target languages: bcl
* OPUS readme: [en-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bcl/README.md)
* dataset: opus+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.zip)
* test set translations: [opus+bt-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.test.txt)
* test set scores: [opus+bt-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bcl/opus+bt-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.bcl | 54.3 | 0.722 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-bcl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"bcl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #bcl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-bcl
* source languages: en
* target languages: bcl
* OPUS readme: en-bcl
* dataset: opus+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: opus+URL
* test set translations: opus+URL
* test set scores: opus+URL
Benchmarks
----------
testset: URL, BLEU: 54.3, chr-F: 0.722
| [
"### opus-mt-en-bcl\n\n\n* source languages: en\n* target languages: bcl\n* OPUS readme: en-bcl\n* dataset: opus+bt\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 54.3, chr-F: 0.722"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #bcl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-bcl\n\n\n* source languages: en\n* target languages: bcl\n* OPUS readme: en-bcl\n* dataset: opus+bt\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 54.3, chr-F: 0.722"
] | [
52,
117
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #bcl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-bcl\n\n\n* source languages: en\n* target languages: bcl\n* OPUS readme: en-bcl\n* dataset: opus+bt\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 54.3, chr-F: 0.722"
] |
translation | transformers |
### opus-mt-en-bem
* source languages: en
* target languages: bem
* OPUS readme: [en-bem](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bem/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bem/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bem/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bem/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.bem | 29.7 | 0.532 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-bem | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"bem",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #bem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-bem
* source languages: en
* target languages: bem
* OPUS readme: en-bem
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 29.7, chr-F: 0.532
| [
"### opus-mt-en-bem\n\n\n* source languages: en\n* target languages: bem\n* OPUS readme: en-bem\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.7, chr-F: 0.532"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #bem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-bem\n\n\n* source languages: en\n* target languages: bem\n* OPUS readme: en-bem\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.7, chr-F: 0.532"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #bem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-bem\n\n\n* source languages: en\n* target languages: bem\n* OPUS readme: en-bem\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.7, chr-F: 0.532"
] |
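The benchmark lines in these processed cards all follow a fixed `testset: ..., BLEU: ..., chr-F: ...` shape. A hypothetical helper (the regex and function name are my own, not part of the dataset) could recover structured records from them:

```python
import re

# Matches lines such as "testset: URL, BLEU: 29.7, chr-F: 0.532"
BENCH_RE = re.compile(
    r"testset:\s*(?P<testset>[^,]+),\s*BLEU:\s*(?P<bleu>[\d.]+),\s*chr-F:\s*(?P<chrf>[\d.]+)"
)

def parse_benchmark(line: str) -> dict:
    """Return {'testset': str, 'bleu': float, 'chrf': float} for one line."""
    m = BENCH_RE.search(line)
    if m is None:
        raise ValueError(f"not a benchmark line: {line!r}")
    return {
        "testset": m.group("testset").strip(),
        "bleu": float(m.group("bleu")),
        "chrf": float(m.group("chrf")),
    }

print(parse_benchmark("testset: URL, BLEU: 29.7, chr-F: 0.532"))
```

Multi-benchmark cards (such as the eng-bat and eng-bnt rows) can be handled by applying this per line.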
translation | transformers |
### opus-mt-en-ber
* source languages: en
* target languages: ber
* OPUS readme: [en-ber](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ber/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ber/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ber/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ber/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ber | 29.7 | 0.544 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ber | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ber",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ber #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ber
* source languages: en
* target languages: ber
* OPUS readme: en-ber
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 29.7, chr-F: 0.544
| [
"### opus-mt-en-ber\n\n\n* source languages: en\n* target languages: ber\n* OPUS readme: en-ber\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.7, chr-F: 0.544"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ber #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ber\n\n\n* source languages: en\n* target languages: ber\n* OPUS readme: en-ber\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.7, chr-F: 0.544"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ber #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ber\n\n\n* source languages: en\n* target languages: ber\n* OPUS readme: en-ber\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.7, chr-F: 0.544"
] |
translation | transformers |
### eng-bul
* source group: English
* target group: Bulgarian
* OPUS readme: [eng-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bul/README.md)
* model: transformer
* source language(s): eng
* target language(s): bul bul_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.bul | 50.6 | 0.680 |
### System Info:
- hf_name: eng-bul
- source_languages: eng
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'bg']
- src_constituents: {'eng'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bul/opus-2020-07-03.test.txt
- src_alpha3: eng
- tgt_alpha3: bul
- short_pair: en-bg
- chrF2_score: 0.68
- bleu: 50.6
- brevity_penalty: 0.96
- ref_len: 69504.0
- src_name: English
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: en
- tgt_alpha2: bg
- prefer_old: False
- long_pair: eng-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "bg"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-bg | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"bg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"bg"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-bul
* source group: English
* target group: Bulgarian
* OPUS readme: eng-bul
* model: transformer
* source language(s): eng
* target language(s): bul bul\_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 50.6, chr-F: 0.680
### System Info:
* hf\_name: eng-bul
* source\_languages: eng
* target\_languages: bul
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'bg']
* src\_constituents: {'eng'}
* tgt\_constituents: {'bul', 'bul\_Latn'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: bul
* short\_pair: en-bg
* chrF2\_score: 0.68
* bleu: 50.6
* brevity\_penalty: 0.96
* ref\_len: 69504.0
* src\_name: English
* tgt\_name: Bulgarian
* train\_date: 2020-07-03
* src\_alpha2: en
* tgt\_alpha2: bg
* prefer\_old: False
* long\_pair: eng-bul
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-bul\n\n\n* source group: English\n* target group: Bulgarian\n* OPUS readme: eng-bul\n* model: transformer\n* source language(s): eng\n* target language(s): bul bul\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.6, chr-F: 0.680",
"### System Info:\n\n\n* hf\\_name: eng-bul\n* source\\_languages: eng\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'bg']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: bul\n* short\\_pair: en-bg\n* chrF2\\_score: 0.68\n* bleu: 50.6\n* brevity\\_penalty: 0.96\n* ref\\_len: 69504.0\n* src\\_name: English\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-07-03\n* src\\_alpha2: en\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: eng-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-bul\n\n\n* source group: English\n* target group: Bulgarian\n* OPUS readme: eng-bul\n* model: transformer\n* source language(s): eng\n* target language(s): bul bul\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.6, chr-F: 0.680",
"### System Info:\n\n\n* hf\\_name: eng-bul\n* source\\_languages: eng\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'bg']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: bul\n* short\\_pair: en-bg\n* chrF2\\_score: 0.68\n* bleu: 50.6\n* brevity\\_penalty: 0.96\n* ref\\_len: 69504.0\n* src\\_name: English\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-07-03\n* src\\_alpha2: en\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: eng-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
163,
408
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-bul\n\n\n* source group: English\n* target group: Bulgarian\n* OPUS readme: eng-bul\n* model: transformer\n* source language(s): eng\n* target language(s): bul bul\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.6, chr-F: 0.680### System Info:\n\n\n* hf\\_name: eng-bul\n* source\\_languages: eng\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'bg']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: bul\n* short\\_pair: en-bg\n* chrF2\\_score: 0.68\n* bleu: 50.6\n* brevity\\_penalty: 0.96\n* ref\\_len: 69504.0\n* src\\_name: English\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-07-03\n* src\\_alpha2: en\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: eng-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
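Several cards above note that "a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)" for multi-target models. A minimal sketch of that input convention (the function name is my own, not part of any card):

```python
def with_target_token(text: str, lang_id: str) -> str:
    """Prefix a source sentence with the >>id<< target-language token
    that multi-target OPUS-MT models expect before tokenization."""
    return f">>{lang_id}<< {text}"

# e.g. requesting Bulgarian output from a multi-target English model
print(with_target_token("Hello world", "bul"))
```

The token goes on the raw source text before it is passed to the model's SentencePiece tokenizer; single-target pairs in this dump do not need it.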
translation | transformers |
### opus-mt-en-bi
* source languages: en
* target languages: bi
* OPUS readme: [en-bi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.bi | 36.4 | 0.543 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-bi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"bi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #bi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-bi
* source languages: en
* target languages: bi
* OPUS readme: en-bi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 36.4, chr-F: 0.543
| [
"### opus-mt-en-bi\n\n\n* source languages: en\n* target languages: bi\n* OPUS readme: en-bi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.4, chr-F: 0.543"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #bi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-bi\n\n\n* source languages: en\n* target languages: bi\n* OPUS readme: en-bi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.4, chr-F: 0.543"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #bi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-bi\n\n\n* source languages: en\n* target languages: bi\n* OPUS readme: en-bi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.4, chr-F: 0.543"
] |
translation | transformers |
### eng-bnt
* source group: English
* target group: Bantu languages
* OPUS readme: [eng-bnt](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bnt/README.md)
* model: transformer
* source language(s): eng
* target language(s): kin lin lug nya run sna swh toi_Latn tso umb xho zul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-kin.eng.kin | 12.5 | 0.519 |
| Tatoeba-test.eng-lin.eng.lin | 1.1 | 0.277 |
| Tatoeba-test.eng-lug.eng.lug | 4.8 | 0.415 |
| Tatoeba-test.eng.multi | 12.1 | 0.449 |
| Tatoeba-test.eng-nya.eng.nya | 22.1 | 0.616 |
| Tatoeba-test.eng-run.eng.run | 13.2 | 0.492 |
| Tatoeba-test.eng-sna.eng.sna | 32.1 | 0.669 |
| Tatoeba-test.eng-swa.eng.swa | 1.7 | 0.180 |
| Tatoeba-test.eng-toi.eng.toi | 10.7 | 0.266 |
| Tatoeba-test.eng-tso.eng.tso | 26.9 | 0.631 |
| Tatoeba-test.eng-umb.eng.umb | 5.2 | 0.295 |
| Tatoeba-test.eng-xho.eng.xho | 22.6 | 0.615 |
| Tatoeba-test.eng-zul.eng.zul | 41.1 | 0.769 |
### System Info:
- hf_name: eng-bnt
- source_languages: eng
- target_languages: bnt
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-bnt/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'sn', 'zu', 'rw', 'lg', 'ts', 'ln', 'ny', 'xh', 'rn', 'bnt']
- src_constituents: {'eng'}
- tgt_constituents: {'sna', 'zul', 'kin', 'lug', 'tso', 'lin', 'nya', 'xho', 'swh', 'run', 'toi_Latn', 'umb'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-bnt/opus-2020-07-26.test.txt
- src_alpha3: eng
- tgt_alpha3: bnt
- short_pair: en-bnt
- chrF2_score: 0.449
- bleu: 12.1
- brevity_penalty: 1.0
- ref_len: 9989.0
- src_name: English
- tgt_name: Bantu languages
- train_date: 2020-07-26
- src_alpha2: en
- tgt_alpha2: bnt
- prefer_old: False
- long_pair: eng-bnt
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "sn", "zu", "rw", "lg", "ts", "ln", "ny", "xh", "rn", "bnt"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-bnt | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sn",
"zu",
"rw",
"lg",
"ts",
"ln",
"ny",
"xh",
"rn",
"bnt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"sn",
"zu",
"rw",
"lg",
"ts",
"ln",
"ny",
"xh",
"rn",
"bnt"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #sn #zu #rw #lg #ts #ln #ny #xh #rn #bnt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-bnt
* source group: English
* target group: Bantu languages
* OPUS readme: eng-bnt
* model: transformer
* source language(s): eng
* target language(s): kin lin lug nya run sna swh toi\_Latn tso umb xho zul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 12.5, chr-F: 0.519
testset: URL, BLEU: 1.1, chr-F: 0.277
testset: URL, BLEU: 4.8, chr-F: 0.415
testset: URL, BLEU: 12.1, chr-F: 0.449
testset: URL, BLEU: 22.1, chr-F: 0.616
testset: URL, BLEU: 13.2, chr-F: 0.492
testset: URL, BLEU: 32.1, chr-F: 0.669
testset: URL, BLEU: 1.7, chr-F: 0.180
testset: URL, BLEU: 10.7, chr-F: 0.266
testset: URL, BLEU: 26.9, chr-F: 0.631
testset: URL, BLEU: 5.2, chr-F: 0.295
testset: URL, BLEU: 22.6, chr-F: 0.615
testset: URL, BLEU: 41.1, chr-F: 0.769
### System Info:
* hf\_name: eng-bnt
* source\_languages: eng
* target\_languages: bnt
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'sn', 'zu', 'rw', 'lg', 'ts', 'ln', 'ny', 'xh', 'rn', 'bnt']
* src\_constituents: {'eng'}
* tgt\_constituents: {'sna', 'zul', 'kin', 'lug', 'tso', 'lin', 'nya', 'xho', 'swh', 'run', 'toi\_Latn', 'umb'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: bnt
* short\_pair: en-bnt
* chrF2\_score: 0.449
* bleu: 12.1
* brevity\_penalty: 1.0
* ref\_len: 9989.0
* src\_name: English
* tgt\_name: Bantu languages
* train\_date: 2020-07-26
* src\_alpha2: en
* tgt\_alpha2: bnt
* prefer\_old: False
* long\_pair: eng-bnt
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-bnt\n\n\n* source group: English\n* target group: Bantu languages\n* OPUS readme: eng-bnt\n* model: transformer\n* source language(s): eng\n* target language(s): kin lin lug nya run sna swh toi\\_Latn tso umb xho zul\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.5, chr-F: 0.519\ntestset: URL, BLEU: 1.1, chr-F: 0.277\ntestset: URL, BLEU: 4.8, chr-F: 0.415\ntestset: URL, BLEU: 12.1, chr-F: 0.449\ntestset: URL, BLEU: 22.1, chr-F: 0.616\ntestset: URL, BLEU: 13.2, chr-F: 0.492\ntestset: URL, BLEU: 32.1, chr-F: 0.669\ntestset: URL, BLEU: 1.7, chr-F: 0.180\ntestset: URL, BLEU: 10.7, chr-F: 0.266\ntestset: URL, BLEU: 26.9, chr-F: 0.631\ntestset: URL, BLEU: 5.2, chr-F: 0.295\ntestset: URL, BLEU: 22.6, chr-F: 0.615\ntestset: URL, BLEU: 41.1, chr-F: 0.769",
"### System Info:\n\n\n* hf\\_name: eng-bnt\n* source\\_languages: eng\n* target\\_languages: bnt\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'sn', 'zu', 'rw', 'lg', 'ts', 'ln', 'ny', 'xh', 'rn', 'bnt']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'sna', 'zul', 'kin', 'lug', 'tso', 'lin', 'nya', 'xho', 'swh', 'run', 'toi\\_Latn', 'umb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: bnt\n* short\\_pair: en-bnt\n* chrF2\\_score: 0.449\n* bleu: 12.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 9989.0\n* src\\_name: English\n* tgt\\_name: Bantu languages\n* train\\_date: 2020-07-26\n* src\\_alpha2: en\n* tgt\\_alpha2: bnt\n* prefer\\_old: False\n* long\\_pair: eng-bnt\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sn #zu #rw #lg #ts #ln #ny #xh #rn #bnt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-bnt\n\n\n* source group: English\n* target group: Bantu languages\n* OPUS readme: eng-bnt\n* model: transformer\n* source language(s): eng\n* target language(s): kin lin lug nya run sna swh toi\\_Latn tso umb xho zul\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.5, chr-F: 0.519\ntestset: URL, BLEU: 1.1, chr-F: 0.277\ntestset: URL, BLEU: 4.8, chr-F: 0.415\ntestset: URL, BLEU: 12.1, chr-F: 0.449\ntestset: URL, BLEU: 22.1, chr-F: 0.616\ntestset: URL, BLEU: 13.2, chr-F: 0.492\ntestset: URL, BLEU: 32.1, chr-F: 0.669\ntestset: URL, BLEU: 1.7, chr-F: 0.180\ntestset: URL, BLEU: 10.7, chr-F: 0.266\ntestset: URL, BLEU: 26.9, chr-F: 0.631\ntestset: URL, BLEU: 5.2, chr-F: 0.295\ntestset: URL, BLEU: 22.6, chr-F: 0.615\ntestset: URL, BLEU: 41.1, chr-F: 0.769",
"### System Info:\n\n\n* hf\\_name: eng-bnt\n* source\\_languages: eng\n* target\\_languages: bnt\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'sn', 'zu', 'rw', 'lg', 'ts', 'ln', 'ny', 'xh', 'rn', 'bnt']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'sna', 'zul', 'kin', 'lug', 'tso', 'lin', 'nya', 'xho', 'swh', 'run', 'toi\\_Latn', 'umb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: bnt\n* short\\_pair: en-bnt\n* chrF2\\_score: 0.449\n* bleu: 12.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 9989.0\n* src\\_name: English\n* tgt\\_name: Bantu languages\n* train\\_date: 2020-07-26\n* src\\_alpha2: en\n* tgt\\_alpha2: bnt\n* prefer\\_old: False\n* long\\_pair: eng-bnt\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
75,
454,
499
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sn #zu #rw #lg #ts #ln #ny #xh #rn #bnt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-bnt\n\n\n* source group: English\n* target group: Bantu languages\n* OPUS readme: eng-bnt\n* model: transformer\n* source language(s): eng\n* target language(s): kin lin lug nya run sna swh toi\\_Latn tso umb xho zul\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.5, chr-F: 0.519\ntestset: URL, BLEU: 1.1, chr-F: 0.277\ntestset: URL, BLEU: 4.8, chr-F: 0.415\ntestset: URL, BLEU: 12.1, chr-F: 0.449\ntestset: URL, BLEU: 22.1, chr-F: 0.616\ntestset: URL, BLEU: 13.2, chr-F: 0.492\ntestset: URL, BLEU: 32.1, chr-F: 0.669\ntestset: URL, BLEU: 1.7, chr-F: 0.180\ntestset: URL, BLEU: 10.7, chr-F: 0.266\ntestset: URL, BLEU: 26.9, chr-F: 0.631\ntestset: URL, BLEU: 5.2, chr-F: 0.295\ntestset: URL, BLEU: 22.6, chr-F: 0.615\ntestset: URL, BLEU: 41.1, chr-F: 0.769### System Info:\n\n\n* hf\\_name: eng-bnt\n* source\\_languages: eng\n* target\\_languages: bnt\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'sn', 'zu', 'rw', 'lg', 'ts', 'ln', 'ny', 'xh', 'rn', 'bnt']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'sna', 'zul', 'kin', 'lug', 'tso', 'lin', 'nya', 'xho', 'swh', 'run', 'toi\\_Latn', 'umb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: bnt\n* short\\_pair: en-bnt\n* chrF2\\_score: 0.449\n* bleu: 12.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 9989.0\n* 
src\\_name: English\n* tgt\\_name: Bantu languages\n* train\\_date: 2020-07-26\n* src\\_alpha2: en\n* tgt\\_alpha2: bnt\n* prefer\\_old: False\n* long\\_pair: eng-bnt\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-bzs
* source languages: en
* target languages: bzs
* OPUS readme: [en-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-bzs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-bzs/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bzs/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-bzs/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.bzs | 43.4 | 0.612 |
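
The three release links above (weights, test-set translations, test-set scores) follow OPUS-MT's regular file-naming pattern. As an illustrative, unofficial helper (the function name is ours), the pattern can be reproduced like this:

```python
BASE = "https://object.pouta.csc.fi/OPUS-MT-models"

def release_urls(pair: str, release: str) -> dict:
    """Build the weights/translations/scores URLs for an OPUS-MT release,
    following the naming pattern used in the links above."""
    stem = f"{BASE}/{pair}/{release}"
    return {
        "weights": f"{stem}.zip",          # original model weights
        "translations": f"{stem}.test.txt",  # test set translations
        "scores": f"{stem}.eval.txt",        # test set scores
    }

print(release_urls("en-bzs", "opus-2020-01-08")["weights"])
# → https://object.pouta.csc.fi/OPUS-MT-models/en-bzs/opus-2020-01-08.zip
```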
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-bzs | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"bzs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #bzs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-bzs
* source languages: en
* target languages: bzs
* OPUS readme: en-bzs
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 43.4, chr-F: 0.612
| [
"### opus-mt-en-bzs\n\n\n* source languages: en\n* target languages: bzs\n* OPUS readme: en-bzs\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.4, chr-F: 0.612"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #bzs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-bzs\n\n\n* source languages: en\n* target languages: bzs\n* OPUS readme: en-bzs\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.4, chr-F: 0.612"
] | [
53,
112
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #bzs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-bzs\n\n\n* source languages: en\n* target languages: bzs\n* OPUS readme: en-bzs\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.4, chr-F: 0.612"
] |
translation | transformers |
### opus-mt-en-ca
* source languages: en
* target languages: ca
* OPUS readme: [en-ca](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ca/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ca | 47.2 | 0.665 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ca | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ca
* source languages: en
* target languages: ca
* OPUS readme: en-ca
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 47.2, chr-F: 0.665
| [
"### opus-mt-en-ca\n\n\n* source languages: en\n* target languages: ca\n* OPUS readme: en-ca\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.2, chr-F: 0.665"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ca\n\n\n* source languages: en\n* target languages: ca\n* OPUS readme: en-ca\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.2, chr-F: 0.665"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ca\n\n\n* source languages: en\n* target languages: ca\n* OPUS readme: en-ca\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.2, chr-F: 0.665"
] |
translation | transformers |
### opus-mt-en-ceb
* source languages: en
* target languages: ceb
* OPUS readme: [en-ceb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ceb/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ceb/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ceb | 51.3 | 0.704 |
| Tatoeba.en.ceb | 31.3 | 0.600 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ceb | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ceb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ceb #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ceb
* source languages: en
* target languages: ceb
* OPUS readme: en-ceb
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 51.3, chr-F: 0.704
testset: URL, BLEU: 31.3, chr-F: 0.600
| [
"### opus-mt-en-ceb\n\n\n* source languages: en\n* target languages: ceb\n* OPUS readme: en-ceb\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.3, chr-F: 0.704\ntestset: URL, BLEU: 31.3, chr-F: 0.600"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ceb #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ceb\n\n\n* source languages: en\n* target languages: ceb\n* OPUS readme: en-ceb\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.3, chr-F: 0.704\ntestset: URL, BLEU: 31.3, chr-F: 0.600"
] | [
52,
131
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ceb #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ceb\n\n\n* source languages: en\n* target languages: ceb\n* OPUS readme: en-ceb\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.3, chr-F: 0.704\ntestset: URL, BLEU: 31.3, chr-F: 0.600"
] |
translation | transformers |
### eng-cel
* source group: English
* target group: Celtic languages
* OPUS readme: [eng-cel](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cel/README.md)
* model: transformer
* source language(s): eng
* target language(s): bre cor cym gla gle glv
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-bre.eng.bre | 11.5 | 0.338 |
| Tatoeba-test.eng-cor.eng.cor | 0.3 | 0.095 |
| Tatoeba-test.eng-cym.eng.cym | 31.0 | 0.549 |
| Tatoeba-test.eng-gla.eng.gla | 7.6 | 0.317 |
| Tatoeba-test.eng-gle.eng.gle | 35.9 | 0.582 |
| Tatoeba-test.eng-glv.eng.glv | 9.9 | 0.454 |
| Tatoeba-test.eng.multi | 18.0 | 0.342 |
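
Because this model serves several target languages, every input sentence must begin with the `>>id<<` token described above. A minimal sketch of preparing such inputs (the helper name is ours, not part of the model card):

```python
def with_target_token(text: str, lang_id: str) -> str:
    """Prefix a source sentence with the >>id<< token that multilingual
    OPUS-MT models require to select the target language."""
    return f">>{lang_id}<< {text}"

# Request Welsh (cym) output from the English->Celtic model.
print(with_target_token("How are you?", "cym"))  # → >>cym<< How are you?
```

The prepared string is what you would then feed to the tokenizer of `Helsinki-NLP/opus-mt-en-cel`; `cym` here must be one of the valid target IDs listed above (bre, cor, cym, gla, gle, glv).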
### System Info:
- hf_name: eng-cel
- source_languages: eng
- target_languages: cel
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cel/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'gd', 'ga', 'br', 'kw', 'gv', 'cy', 'cel']
- src_constituents: {'eng'}
- tgt_constituents: {'gla', 'gle', 'bre', 'cor', 'glv', 'cym'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cel/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: cel
- short_pair: en-cel
- chrF2_score: 0.342
- bleu: 18.0
- brevity_penalty: 0.9590000000000001
- ref_len: 45370.0
- src_name: English
- tgt_name: Celtic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: cel
- prefer_old: False
- long_pair: eng-cel
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "gd", "ga", "br", "kw", "gv", "cy", "cel"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-cel | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"gd",
"ga",
"br",
"kw",
"gv",
"cy",
"cel",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"gd",
"ga",
"br",
"kw",
"gv",
"cy",
"cel"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #gd #ga #br #kw #gv #cy #cel #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-cel
* source group: English
* target group: Celtic languages
* OPUS readme: eng-cel
* model: transformer
* source language(s): eng
* target language(s): bre cor cym gla gle glv
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 11.5, chr-F: 0.338
testset: URL, BLEU: 0.3, chr-F: 0.095
testset: URL, BLEU: 31.0, chr-F: 0.549
testset: URL, BLEU: 7.6, chr-F: 0.317
testset: URL, BLEU: 35.9, chr-F: 0.582
testset: URL, BLEU: 9.9, chr-F: 0.454
testset: URL, BLEU: 18.0, chr-F: 0.342
### System Info:
* hf\_name: eng-cel
* source\_languages: eng
* target\_languages: cel
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'gd', 'ga', 'br', 'kw', 'gv', 'cy', 'cel']
* src\_constituents: {'eng'}
* tgt\_constituents: {'gla', 'gle', 'bre', 'cor', 'glv', 'cym'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: cel
* short\_pair: en-cel
* chrF2\_score: 0.342
* bleu: 18.0
* brevity\_penalty: 0.9590000000000001
* ref\_len: 45370.0
* src\_name: English
* tgt\_name: Celtic languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: cel
* prefer\_old: False
* long\_pair: eng-cel
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-cel\n\n\n* source group: English\n* target group: Celtic languages\n* OPUS readme: eng-cel\n* model: transformer\n* source language(s): eng\n* target language(s): bre cor cym gla gle glv\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.5, chr-F: 0.338\ntestset: URL, BLEU: 0.3, chr-F: 0.095\ntestset: URL, BLEU: 31.0, chr-F: 0.549\ntestset: URL, BLEU: 7.6, chr-F: 0.317\ntestset: URL, BLEU: 35.9, chr-F: 0.582\ntestset: URL, BLEU: 9.9, chr-F: 0.454\ntestset: URL, BLEU: 18.0, chr-F: 0.342",
"### System Info:\n\n\n* hf\\_name: eng-cel\n* source\\_languages: eng\n* target\\_languages: cel\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'gd', 'ga', 'br', 'kw', 'gv', 'cy', 'cel']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'gla', 'gle', 'bre', 'cor', 'glv', 'cym'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: cel\n* short\\_pair: en-cel\n* chrF2\\_score: 0.342\n* bleu: 18.0\n* brevity\\_penalty: 0.9590000000000001\n* ref\\_len: 45370.0\n* src\\_name: English\n* tgt\\_name: Celtic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: cel\n* prefer\\_old: False\n* long\\_pair: eng-cel\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #gd #ga #br #kw #gv #cy #cel #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-cel\n\n\n* source group: English\n* target group: Celtic languages\n* OPUS readme: eng-cel\n* model: transformer\n* source language(s): eng\n* target language(s): bre cor cym gla gle glv\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.5, chr-F: 0.338\ntestset: URL, BLEU: 0.3, chr-F: 0.095\ntestset: URL, BLEU: 31.0, chr-F: 0.549\ntestset: URL, BLEU: 7.6, chr-F: 0.317\ntestset: URL, BLEU: 35.9, chr-F: 0.582\ntestset: URL, BLEU: 9.9, chr-F: 0.454\ntestset: URL, BLEU: 18.0, chr-F: 0.342",
"### System Info:\n\n\n* hf\\_name: eng-cel\n* source\\_languages: eng\n* target\\_languages: cel\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'gd', 'ga', 'br', 'kw', 'gv', 'cy', 'cel']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'gla', 'gle', 'bre', 'cor', 'glv', 'cym'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: cel\n* short\\_pair: en-cel\n* chrF2\\_score: 0.342\n* bleu: 18.0\n* brevity\\_penalty: 0.9590000000000001\n* ref\\_len: 45370.0\n* src\\_name: English\n* tgt\\_name: Celtic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: cel\n* prefer\\_old: False\n* long\\_pair: eng-cel\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
66,
305,
459
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #gd #ga #br #kw #gv #cy #cel #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-cel\n\n\n* source group: English\n* target group: Celtic languages\n* OPUS readme: eng-cel\n* model: transformer\n* source language(s): eng\n* target language(s): bre cor cym gla gle glv\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.5, chr-F: 0.338\ntestset: URL, BLEU: 0.3, chr-F: 0.095\ntestset: URL, BLEU: 31.0, chr-F: 0.549\ntestset: URL, BLEU: 7.6, chr-F: 0.317\ntestset: URL, BLEU: 35.9, chr-F: 0.582\ntestset: URL, BLEU: 9.9, chr-F: 0.454\ntestset: URL, BLEU: 18.0, chr-F: 0.342### System Info:\n\n\n* hf\\_name: eng-cel\n* source\\_languages: eng\n* target\\_languages: cel\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'gd', 'ga', 'br', 'kw', 'gv', 'cy', 'cel']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'gla', 'gle', 'bre', 'cor', 'glv', 'cym'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: cel\n* short\\_pair: en-cel\n* chrF2\\_score: 0.342\n* bleu: 18.0\n* brevity\\_penalty: 0.9590000000000001\n* ref\\_len: 45370.0\n* src\\_name: English\n* tgt\\_name: Celtic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: cel\n* prefer\\_old: False\n* long\\_pair: eng-cel\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* 
port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-chk
* source languages: en
* target languages: chk
* OPUS readme: [en-chk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-chk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-chk/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.chk | 26.1 | 0.468 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-chk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"chk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #chk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-chk
* source languages: en
* target languages: chk
* OPUS readme: en-chk
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.1, chr-F: 0.468
| [
"### opus-mt-en-chk\n\n\n* source languages: en\n* target languages: chk\n* OPUS readme: en-chk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.468"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #chk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-chk\n\n\n* source languages: en\n* target languages: chk\n* OPUS readme: en-chk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.468"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #chk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-chk\n\n\n* source languages: en\n* target languages: chk\n* OPUS readme: en-chk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.468"
] |
translation | transformers |
### eng-cpf
* source group: English
* target group: Creoles and pidgins, French‑based
* OPUS readme: [eng-cpf](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpf/README.md)
* model: transformer
* source language(s): eng
* target language(s): gcf_Latn hat mfe
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-gcf.eng.gcf | 6.2 | 0.262 |
| Tatoeba-test.eng-hat.eng.hat | 25.7 | 0.451 |
| Tatoeba-test.eng-mfe.eng.mfe | 80.1 | 0.900 |
| Tatoeba-test.eng.multi | 15.9 | 0.354 |
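
The System Info block below reports BLEU's brevity penalty (1.0 here, i.e. no length penalty applied). As a reminder of what that number means, a sketch of the standard brevity-penalty formula, BP = exp(1 - r/c) when the hypothesis length c is shorter than the reference length r, else 1:

```python
import math

def brevity_penalty(ref_len: float, hyp_len: float) -> float:
    """BLEU brevity penalty: penalizes hypotheses shorter than the reference."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

print(brevity_penalty(100, 100))           # → 1.0 (no penalty)
print(round(brevity_penalty(100, 96), 3))  # → 0.959 (slightly short output)
```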
### System Info:
- hf_name: eng-cpf
- source_languages: eng
- target_languages: cpf
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpf/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ht', 'cpf']
- src_constituents: {'eng'}
- tgt_constituents: {'gcf_Latn', 'hat', 'mfe'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpf/opus-2020-07-26.test.txt
- src_alpha3: eng
- tgt_alpha3: cpf
- short_pair: en-cpf
- chrF2_score: 0.354
- bleu: 15.9
- brevity_penalty: 1.0
- ref_len: 1012.0
- src_name: English
- tgt_name: Creoles and pidgins, French‑based
- train_date: 2020-07-26
- src_alpha2: en
- tgt_alpha2: cpf
- prefer_old: False
- long_pair: eng-cpf
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "ht", "cpf"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-cpf | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ht",
"cpf",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"ht",
"cpf"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ht #cpf #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-cpf
* source group: English
* target group: Creoles and pidgins, French‑based
* OPUS readme: eng-cpf
* model: transformer
* source language(s): eng
* target language(s): gcf\_Latn hat mfe
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 6.2, chr-F: 0.262
testset: URL, BLEU: 25.7, chr-F: 0.451
testset: URL, BLEU: 80.1, chr-F: 0.900
testset: URL, BLEU: 15.9, chr-F: 0.354
### System Info:
* hf\_name: eng-cpf
* source\_languages: eng
* target\_languages: cpf
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'ht', 'cpf']
* src\_constituents: {'eng'}
* tgt\_constituents: {'gcf\_Latn', 'hat', 'mfe'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: cpf
* short\_pair: en-cpf
* chrF2\_score: 0.354
* bleu: 15.9
* brevity\_penalty: 1.0
* ref\_len: 1012.0
* src\_name: English
* tgt\_name: Creoles and pidgins, French‑based
* train\_date: 2020-07-26
* src\_alpha2: en
* tgt\_alpha2: cpf
* prefer\_old: False
* long\_pair: eng-cpf
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-cpf\n\n\n* source group: English\n* target group: Creoles and pidgins, French‑based\n* OPUS readme: eng-cpf\n* model: transformer\n* source language(s): eng\n* target language(s): gcf\\_Latn hat mfe\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 6.2, chr-F: 0.262\ntestset: URL, BLEU: 25.7, chr-F: 0.451\ntestset: URL, BLEU: 80.1, chr-F: 0.900\ntestset: URL, BLEU: 15.9, chr-F: 0.354",
"### System Info:\n\n\n* hf\\_name: eng-cpf\n* source\\_languages: eng\n* target\\_languages: cpf\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ht', 'cpf']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'gcf\\_Latn', 'hat', 'mfe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: cpf\n* short\\_pair: en-cpf\n* chrF2\\_score: 0.354\n* bleu: 15.9\n* brevity\\_penalty: 1.0\n* ref\\_len: 1012.0\n* src\\_name: English\n* tgt\\_name: Creoles and pidgins, French‑based\n* train\\_date: 2020-07-26\n* src\\_alpha2: en\n* tgt\\_alpha2: cpf\n* prefer\\_old: False\n* long\\_pair: eng-cpf\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ht #cpf #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-cpf\n\n\n* source group: English\n* target group: Creoles and pidgins, French‑based\n* OPUS readme: eng-cpf\n* model: transformer\n* source language(s): eng\n* target language(s): gcf\\_Latn hat mfe\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 6.2, chr-F: 0.262\ntestset: URL, BLEU: 25.7, chr-F: 0.451\ntestset: URL, BLEU: 80.1, chr-F: 0.900\ntestset: URL, BLEU: 15.9, chr-F: 0.354",
"### System Info:\n\n\n* hf\\_name: eng-cpf\n* source\\_languages: eng\n* target\\_languages: cpf\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ht', 'cpf']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'gcf\\_Latn', 'hat', 'mfe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: cpf\n* short\\_pair: en-cpf\n* chrF2\\_score: 0.354\n* bleu: 15.9\n* brevity\\_penalty: 1.0\n* ref\\_len: 1012.0\n* src\\_name: English\n* tgt\\_name: Creoles and pidgins, French‑based\n* train\\_date: 2020-07-26\n* src\\_alpha2: en\n* tgt\\_alpha2: cpf\n* prefer\\_old: False\n* long\\_pair: eng-cpf\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
55,
240,
426
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ht #cpf #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-cpf\n\n\n* source group: English\n* target group: Creoles and pidgins, French‑based\n* OPUS readme: eng-cpf\n* model: transformer\n* source language(s): eng\n* target language(s): gcf\\_Latn hat mfe\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 6.2, chr-F: 0.262\ntestset: URL, BLEU: 25.7, chr-F: 0.451\ntestset: URL, BLEU: 80.1, chr-F: 0.900\ntestset: URL, BLEU: 15.9, chr-F: 0.354### System Info:\n\n\n* hf\\_name: eng-cpf\n* source\\_languages: eng\n* target\\_languages: cpf\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ht', 'cpf']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'gcf\\_Latn', 'hat', 'mfe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: cpf\n* short\\_pair: en-cpf\n* chrF2\\_score: 0.354\n* bleu: 15.9\n* brevity\\_penalty: 1.0\n* ref\\_len: 1012.0\n* src\\_name: English\n* tgt\\_name: Creoles and pidgins, French‑based\n* train\\_date: 2020-07-26\n* src\\_alpha2: en\n* tgt\\_alpha2: cpf\n* prefer\\_old: False\n* long\\_pair: eng-cpf\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
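The eng-cpf card above notes that, because the model serves several target languages (gcf_Latn, hat, mfe), each source sentence needs a sentence-initial token of the form `>>id<<`. A minimal sketch of attaching that prefix before tokenization; the helper name and example sentence are illustrative, not part of the card:

```python
# Sketch: prepend the Marian-style target-language token that multi-target
# OPUS-MT models such as eng-cpf expect on every source sentence.
VALID_CPF_TARGETS = {"gcf_Latn", "hat", "mfe"}  # tgt_constituents from the card

def with_target_token(sentence: str, target_id: str) -> str:
    """Return the source sentence with a sentence-initial '>>id<<' token."""
    if target_id not in VALID_CPF_TARGETS:
        raise ValueError(f"unknown target language id: {target_id}")
    return f">>{target_id}<< {sentence}"

print(with_target_token("Good morning.", "hat"))  # >>hat<< Good morning.
```

The prefixed string is what would be fed to the model's tokenizer; without the token, a multi-target model has no way to know which Creole to emit.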
translation | transformers |
### eng-cpp
* source group: English
* target group: Creoles and pidgins, Portuguese-based
* OPUS readme: [eng-cpp](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpp/README.md)
* model: transformer
* source language(s): eng
* target language(s): ind max_Latn min pap tmw_Latn zlm_Latn zsm_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-msa.eng.msa | 32.6 | 0.573 |
| Tatoeba-test.eng.multi | 32.7 | 0.574 |
| Tatoeba-test.eng-pap.eng.pap | 42.5 | 0.633 |
### System Info:
- hf_name: eng-cpp
- source_languages: eng
- target_languages: cpp
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cpp/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'id', 'cpp']
- src_constituents: {'eng'}
- tgt_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cpp/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: cpp
- short_pair: en-cpp
- chrF2_score: 0.574
- bleu: 32.7
- brevity_penalty: 0.996
- ref_len: 34010.0
- src_name: English
- tgt_name: Creoles and pidgins, Portuguese-based
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: cpp
- prefer_old: False
- long_pair: eng-cpp
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "id", "cpp"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-cpp | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"id",
"cpp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"id",
"cpp"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #id #cpp #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-cpp
* source group: English
* target group: Creoles and pidgins, Portuguese-based
* OPUS readme: eng-cpp
* model: transformer
* source language(s): eng
* target language(s): ind max\_Latn min pap tmw\_Latn zlm\_Latn zsm\_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 32.6, chr-F: 0.573
testset: URL, BLEU: 32.7, chr-F: 0.574
testset: URL, BLEU: 42.5, chr-F: 0.633
### System Info:
* hf\_name: eng-cpp
* source\_languages: eng
* target\_languages: cpp
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'id', 'cpp']
* src\_constituents: {'eng'}
* tgt\_constituents: {'zsm\_Latn', 'ind', 'pap', 'min', 'tmw\_Latn', 'max\_Latn', 'zlm\_Latn'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: cpp
* short\_pair: en-cpp
* chrF2\_score: 0.574
* bleu: 32.7
* brevity\_penalty: 0.996
* ref\_len: 34010.0
* src\_name: English
* tgt\_name: Creoles and pidgins, Portuguese-based
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: cpp
* prefer\_old: False
* long\_pair: eng-cpp
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-cpp\n\n\n* source group: English\n* target group: Creoles and pidgins, Portuguese-based\n* OPUS readme: eng-cpp\n* model: transformer\n* source language(s): eng\n* target language(s): ind max\\_Latn min pap tmw\\_Latn zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.573\ntestset: URL, BLEU: 32.7, chr-F: 0.574\ntestset: URL, BLEU: 42.5, chr-F: 0.633",
"### System Info:\n\n\n* hf\\_name: eng-cpp\n* source\\_languages: eng\n* target\\_languages: cpp\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'id', 'cpp']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'pap', 'min', 'tmw\\_Latn', 'max\\_Latn', 'zlm\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: cpp\n* short\\_pair: en-cpp\n* chrF2\\_score: 0.574\n* bleu: 32.7\n* brevity\\_penalty: 0.996\n* ref\\_len: 34010.0\n* src\\_name: English\n* tgt\\_name: Creoles and pidgins, Portuguese-based\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: cpp\n* prefer\\_old: False\n* long\\_pair: eng-cpp\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #id #cpp #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-cpp\n\n\n* source group: English\n* target group: Creoles and pidgins, Portuguese-based\n* OPUS readme: eng-cpp\n* model: transformer\n* source language(s): eng\n* target language(s): ind max\\_Latn min pap tmw\\_Latn zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.573\ntestset: URL, BLEU: 32.7, chr-F: 0.574\ntestset: URL, BLEU: 42.5, chr-F: 0.633",
"### System Info:\n\n\n* hf\\_name: eng-cpp\n* source\\_languages: eng\n* target\\_languages: cpp\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'id', 'cpp']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'pap', 'min', 'tmw\\_Latn', 'max\\_Latn', 'zlm\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: cpp\n* short\\_pair: en-cpp\n* chrF2\\_score: 0.574\n* bleu: 32.7\n* brevity\\_penalty: 0.996\n* ref\\_len: 34010.0\n* src\\_name: English\n* tgt\\_name: Creoles and pidgins, Portuguese-based\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: cpp\n* prefer\\_old: False\n* long\\_pair: eng-cpp\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
54,
242,
460
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #id #cpp #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-cpp\n\n\n* source group: English\n* target group: Creoles and pidgins, Portuguese-based\n* OPUS readme: eng-cpp\n* model: transformer\n* source language(s): eng\n* target language(s): ind max\\_Latn min pap tmw\\_Latn zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.573\ntestset: URL, BLEU: 32.7, chr-F: 0.574\ntestset: URL, BLEU: 42.5, chr-F: 0.633### System Info:\n\n\n* hf\\_name: eng-cpp\n* source\\_languages: eng\n* target\\_languages: cpp\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'id', 'cpp']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'pap', 'min', 'tmw\\_Latn', 'max\\_Latn', 'zlm\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: cpp\n* short\\_pair: en-cpp\n* chrF2\\_score: 0.574\n* bleu: 32.7\n* brevity\\_penalty: 0.996\n* ref\\_len: 34010.0\n* src\\_name: English\n* tgt\\_name: Creoles and pidgins, Portuguese-based\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: cpp\n* prefer\\_old: False\n* long\\_pair: eng-cpp\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
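The eng-cpp record above reports bleu 32.7 with brevity_penalty 0.996 against ref_len 34010.0. BLEU's brevity penalty is 1 when the hypothesis corpus is at least as long as the reference, and exp(1 - r/c) when it is shorter. A small sketch of that formula (the example hypothesis length of 33,874 tokens is a back-calculated illustration, not a value from the card):

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """Standard BLEU brevity penalty: 1.0 when the hypothesis is at least
    as long as the reference, exp(1 - r/c) when it is shorter."""
    if hyp_len == 0:
        return 0.0
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# With ref_len = 34010 as in the card, a hypothesis of roughly 33,874
# tokens reproduces the reported penalty.
print(round(brevity_penalty(33874, 34010), 3))  # 0.996
```

This is why a slightly short translation output (here about 0.4% shorter than the references) shaves a fraction off the BLEU score even when n-gram precision is unchanged.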
translation | transformers |
### opus-mt-en-crs
* source languages: en
* target languages: crs
* OPUS readme: [en-crs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-crs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-crs/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-crs/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-crs/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.crs | 45.2 | 0.617 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-crs | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"crs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #crs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-crs
* source languages: en
* target languages: crs
* OPUS readme: en-crs
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 45.2, chr-F: 0.617
| [
"### opus-mt-en-crs\n\n\n* source languages: en\n* target languages: crs\n* OPUS readme: en-crs\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.2, chr-F: 0.617"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #crs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-crs\n\n\n* source languages: en\n* target languages: crs\n* OPUS readme: en-crs\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.2, chr-F: 0.617"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #crs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-crs\n\n\n* source languages: en\n* target languages: crs\n* OPUS readme: en-crs\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.2, chr-F: 0.617"
] |
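Each card's Benchmarks table is a three-column markdown row such as `| JW300.en.crs | 45.2 | 0.617 |`. A hedged helper for pulling those rows into typed values (assuming the fixed testset/BLEU/chr-F column order used throughout this dump):

```python
def parse_benchmark_row(row: str) -> tuple[str, float, float]:
    """Split one markdown benchmark row into (testset, BLEU, chr-F)."""
    # Drop the outer pipes, then split on the inner column separators.
    cells = [c.strip() for c in row.strip().strip("|").split("|")]
    testset, bleu, chrf = cells
    return testset, float(bleu), float(chrf)

print(parse_benchmark_row("| JW300.en.crs | 45.2 | 0.617 |"))
```

Collecting these tuples across records is enough to compare, say, JW300 versus Tatoeba test sets for a given language pair.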
translation | transformers |
### opus-mt-en-cs
* source languages: en
* target languages: cs
* OPUS readme: [en-cs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-cs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cs/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.en.cs | 22.8 | 0.507 |
| news-test2008.en.cs | 20.7 | 0.485 |
| newstest2009.en.cs | 21.8 | 0.500 |
| newstest2010.en.cs | 22.1 | 0.505 |
| newstest2011.en.cs | 23.2 | 0.507 |
| newstest2012.en.cs | 20.8 | 0.482 |
| newstest2013.en.cs | 24.7 | 0.514 |
| newstest2015-encs.en.cs | 24.9 | 0.527 |
| newstest2016-encs.en.cs | 26.7 | 0.540 |
| newstest2017-encs.en.cs | 22.7 | 0.503 |
| newstest2018-encs.en.cs | 22.9 | 0.504 |
| newstest2019-encs.en.cs | 24.9 | 0.518 |
| Tatoeba.en.cs | 46.1 | 0.647 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-cs | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"cs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #cs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-cs
* source languages: en
* target languages: cs
* OPUS readme: en-cs
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.8, chr-F: 0.507
testset: URL, BLEU: 20.7, chr-F: 0.485
testset: URL, BLEU: 21.8, chr-F: 0.500
testset: URL, BLEU: 22.1, chr-F: 0.505
testset: URL, BLEU: 23.2, chr-F: 0.507
testset: URL, BLEU: 20.8, chr-F: 0.482
testset: URL, BLEU: 24.7, chr-F: 0.514
testset: URL, BLEU: 24.9, chr-F: 0.527
testset: URL, BLEU: 26.7, chr-F: 0.540
testset: URL, BLEU: 22.7, chr-F: 0.503
testset: URL, BLEU: 22.9, chr-F: 0.504
testset: URL, BLEU: 24.9, chr-F: 0.518
testset: URL, BLEU: 46.1, chr-F: 0.647
| [
"### opus-mt-en-cs\n\n\n* source languages: en\n* target languages: cs\n* OPUS readme: en-cs\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.507\ntestset: URL, BLEU: 20.7, chr-F: 0.485\ntestset: URL, BLEU: 21.8, chr-F: 0.500\ntestset: URL, BLEU: 22.1, chr-F: 0.505\ntestset: URL, BLEU: 23.2, chr-F: 0.507\ntestset: URL, BLEU: 20.8, chr-F: 0.482\ntestset: URL, BLEU: 24.7, chr-F: 0.514\ntestset: URL, BLEU: 24.9, chr-F: 0.527\ntestset: URL, BLEU: 26.7, chr-F: 0.540\ntestset: URL, BLEU: 22.7, chr-F: 0.503\ntestset: URL, BLEU: 22.9, chr-F: 0.504\ntestset: URL, BLEU: 24.9, chr-F: 0.518\ntestset: URL, BLEU: 46.1, chr-F: 0.647"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #cs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-cs\n\n\n* source languages: en\n* target languages: cs\n* OPUS readme: en-cs\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.507\ntestset: URL, BLEU: 20.7, chr-F: 0.485\ntestset: URL, BLEU: 21.8, chr-F: 0.500\ntestset: URL, BLEU: 22.1, chr-F: 0.505\ntestset: URL, BLEU: 23.2, chr-F: 0.507\ntestset: URL, BLEU: 20.8, chr-F: 0.482\ntestset: URL, BLEU: 24.7, chr-F: 0.514\ntestset: URL, BLEU: 24.9, chr-F: 0.527\ntestset: URL, BLEU: 26.7, chr-F: 0.540\ntestset: URL, BLEU: 22.7, chr-F: 0.503\ntestset: URL, BLEU: 22.9, chr-F: 0.504\ntestset: URL, BLEU: 24.9, chr-F: 0.518\ntestset: URL, BLEU: 46.1, chr-F: 0.647"
] | [
51,
379
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #cs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-cs\n\n\n* source languages: en\n* target languages: cs\n* OPUS readme: en-cs\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.507\ntestset: URL, BLEU: 20.7, chr-F: 0.485\ntestset: URL, BLEU: 21.8, chr-F: 0.500\ntestset: URL, BLEU: 22.1, chr-F: 0.505\ntestset: URL, BLEU: 23.2, chr-F: 0.507\ntestset: URL, BLEU: 20.8, chr-F: 0.482\ntestset: URL, BLEU: 24.7, chr-F: 0.514\ntestset: URL, BLEU: 24.9, chr-F: 0.527\ntestset: URL, BLEU: 26.7, chr-F: 0.540\ntestset: URL, BLEU: 22.7, chr-F: 0.503\ntestset: URL, BLEU: 22.9, chr-F: 0.504\ntestset: URL, BLEU: 24.9, chr-F: 0.518\ntestset: URL, BLEU: 46.1, chr-F: 0.647"
] |
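The chr-F column reported alongside BLEU in every card is a character n-gram F-score (by convention n up to 6, with recall weighted by beta = 2). A deliberately simplified sketch of the idea; real implementations such as sacreBLEU differ in whitespace handling and n-gram averaging, so this will not exactly reproduce the scores above:

```python
from collections import Counter

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Toy chrF: average character n-gram precision/recall, combined as F_beta."""
    hyp = hypothesis.replace(" ", "")
    ref = reference.replace(" ", "")
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(hyp[i:i + n] for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(ref[i:i + n] for i in range(len(ref) - n + 1))
        if not hyp_ngrams or not ref_ngrams:
            continue  # strings shorter than n contribute nothing at this order
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        precisions.append(overlap / sum(hyp_ngrams.values()))
        recalls.append(overlap / sum(ref_ngrams.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

Because it scores character rather than word overlap, chrF is gentler than BLEU on morphologically rich targets like Czech, which is one reason these cards report both.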
translation | transformers |
### eng-cus
* source group: English
* target group: Cushitic languages
* OPUS readme: [eng-cus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cus/README.md)
* model: transformer
* source language(s): eng
* target language(s): som
* model: transformer
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.multi | 16.0 | 0.173 |
| Tatoeba-test.eng-som.eng.som | 16.0 | 0.173 |
### System Info:
- hf_name: eng-cus
- source_languages: eng
- target_languages: cus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'so', 'cus']
- src_constituents: {'eng'}
- tgt_constituents: {'som'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-cus/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: cus
- short_pair: en-cus
- chrF2_score: 0.17300000000000001
- bleu: 16.0
- brevity_penalty: 1.0
- ref_len: 3.0
- src_name: English
- tgt_name: Cushitic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: cus
- prefer_old: False
- long_pair: eng-cus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "so", "cus"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-cus | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"en",
"so",
"cus",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"so",
"cus"
] | TAGS
#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #en #so #cus #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-cus
* source group: English
* target group: Cushitic languages
* OPUS readme: eng-cus
* model: transformer
* source language(s): eng
* target language(s): som
* model: transformer
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 16.0, chr-F: 0.173
testset: URL, BLEU: 16.0, chr-F: 0.173
### System Info:
* hf\_name: eng-cus
* source\_languages: eng
* target\_languages: cus
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'so', 'cus']
* src\_constituents: {'eng'}
* tgt\_constituents: {'som'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm12k,spm12k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: cus
* short\_pair: en-cus
* chrF2\_score: 0.17300000000000001
* bleu: 16.0
* brevity\_penalty: 1.0
* ref\_len: 3.0
* src\_name: English
* tgt\_name: Cushitic languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: cus
* prefer\_old: False
* long\_pair: eng-cus
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-cus\n\n\n* source group: English\n* target group: Cushitic languages\n* OPUS readme: eng-cus\n* model: transformer\n* source language(s): eng\n* target language(s): som\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.0, chr-F: 0.173\ntestset: URL, BLEU: 16.0, chr-F: 0.173",
"### System Info:\n\n\n* hf\\_name: eng-cus\n* source\\_languages: eng\n* target\\_languages: cus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'so', 'cus']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'som'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: cus\n* short\\_pair: en-cus\n* chrF2\\_score: 0.17300000000000001\n* bleu: 16.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 3.0\n* src\\_name: English\n* tgt\\_name: Cushitic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: cus\n* prefer\\_old: False\n* long\\_pair: eng-cus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #en #so #cus #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-cus\n\n\n* source group: English\n* target group: Cushitic languages\n* OPUS readme: eng-cus\n* model: transformer\n* source language(s): eng\n* target language(s): som\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.0, chr-F: 0.173\ntestset: URL, BLEU: 16.0, chr-F: 0.173",
"### System Info:\n\n\n* hf\\_name: eng-cus\n* source\\_languages: eng\n* target\\_languages: cus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'so', 'cus']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'som'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: cus\n* short\\_pair: en-cus\n* chrF2\\_score: 0.17300000000000001\n* bleu: 16.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 3.0\n* src\\_name: English\n* tgt\\_name: Cushitic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: cus\n* prefer\\_old: False\n* long\\_pair: eng-cus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
58,
154,
410
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #en #so #cus #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-cus\n\n\n* source group: English\n* target group: Cushitic languages\n* OPUS readme: eng-cus\n* model: transformer\n* source language(s): eng\n* target language(s): som\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.0, chr-F: 0.173\ntestset: URL, BLEU: 16.0, chr-F: 0.173### System Info:\n\n\n* hf\\_name: eng-cus\n* source\\_languages: eng\n* target\\_languages: cus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'so', 'cus']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'som'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: cus\n* short\\_pair: en-cus\n* chrF2\\_score: 0.17300000000000001\n* bleu: 16.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 3.0\n* src\\_name: English\n* tgt\\_name: Cushitic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: cus\n* prefer\\_old: False\n* long\\_pair: eng-cus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-cy
* source languages: en
* target languages: cy
* OPUS readme: [en-cy](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-cy/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-cy/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cy/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-cy/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.cy | 25.3 | 0.487 |
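This card has no usage snippet. As a minimal sketch (the `Helsinki-NLP/opus-mt-{src}-{tgt}` repo-naming convention and the MarianMT class names are assumptions inferred from this card's own model ID, not stated in the card text), loading might look like:

```python
# Sketch only: composes the Hub model ID for an OPUS-MT language pair.
# The "Helsinki-NLP/opus-mt-{src}-{tgt}" pattern is inferred from this
# card's own ID and is an assumption, not part of the card text.
def opus_mt_model_id(src: str, tgt: str) -> str:
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

model_id = opus_mt_model_id("en", "cy")

# Actual loading needs a network connection and the transformers
# package, so it is left commented out here:
# from transformers import MarianMTModel, MarianTokenizer
# tokenizer = MarianTokenizer.from_pretrained(model_id)
# model = MarianMTModel.from_pretrained(model_id)
```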
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-cy | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"cy",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #cy #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-cy
* source languages: en
* target languages: cy
* OPUS readme: en-cy
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.3, chr-F: 0.487
| [
"### opus-mt-en-cy\n\n\n* source languages: en\n* target languages: cy\n* OPUS readme: en-cy\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.487"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #cy #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-cy\n\n\n* source languages: en\n* target languages: cy\n* OPUS readme: en-cy\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.487"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #cy #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-cy\n\n\n* source languages: en\n* target languages: cy\n* OPUS readme: en-cy\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.487"
] |
translation | transformers |
### opus-mt-en-da
* source languages: en
* target languages: da
* OPUS readme: [en-da](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-da/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-da/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-da/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-da/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.da | 60.4 | 0.745 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-da | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #da #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-da
* source languages: en
* target languages: da
* OPUS readme: en-da
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 60.4, chr-F: 0.745
| [
"### opus-mt-en-da\n\n\n* source languages: en\n* target languages: da\n* OPUS readme: en-da\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 60.4, chr-F: 0.745"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #da #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-da\n\n\n* source languages: en\n* target languages: da\n* OPUS readme: en-da\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 60.4, chr-F: 0.745"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #da #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-da\n\n\n* source languages: en\n* target languages: da\n* OPUS readme: en-da\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 60.4, chr-F: 0.745"
] |
translation | transformers |
### opus-mt-en-de
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation
- **Language(s):**
- Source Language: English
- Target Language: German
- **License:** CC-BY-4.0
- **Resources for more information:**
- [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Uses
#### Direct Use
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
Further details about the dataset for this model can be found in the OPUS readme: [en-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-de/README.md)
#### Training Data
##### Preprocessing
* pre-processing: normalization + SentencePiece
* dataset: [opus](https://github.com/Helsinki-NLP/Opus-MT)
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.test.txt)
## Evaluation
#### Results
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.eval.txt)
#### Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.en.de | 23.5 | 0.540 |
| news-test2008.en.de | 23.5 | 0.529 |
| newstest2009.en.de | 22.3 | 0.530 |
| newstest2010.en.de | 24.9 | 0.544 |
| newstest2011.en.de | 22.5 | 0.524 |
| newstest2012.en.de | 23.0 | 0.525 |
| newstest2013.en.de | 26.9 | 0.553 |
| newstest2015-ende.en.de | 31.1 | 0.594 |
| newstest2016-ende.en.de | 37.0 | 0.636 |
| newstest2017-ende.en.de | 29.9 | 0.586 |
| newstest2018-ende.en.de | 45.2 | 0.690 |
| newstest2019-ende.en.de | 40.9 | 0.654 |
| Tatoeba.en.de | 47.3 | 0.664 |
## Citation Information
```bibtex
@InProceedings{TiedemannThottingal:EAMT2020,
author = {J{\"o}rg Tiedemann and Santhosh Thottingal},
title = {{OPUS-MT} — {B}uilding open translation services for the {W}orld},
booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)},
year = {2020},
address = {Lisbon, Portugal}
}
```
## How to Get Started With the Model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de")
```
| {"license": "cc-by-4.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-de | null | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"de",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #jax #rust #marian #text2text-generation #translation #en #de #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-de
Table of Contents
-----------------
* Model Details
* Uses
* Risks, Limitations and Biases
* Training
* Evaluation
* Citation Information
* How to Get Started With the Model
Model Details
-------------
Model Description:
* Developed by: Language Technology Research Group at the University of Helsinki
* Model Type: Translation
* Language(s):
+ Source Language: English
+ Target Language: German
* License: CC-BY-4.0
* Resources for more information:
+ GitHub Repo
Uses
----
#### Direct Use
This model can be used for translation and text-to-text generation.
Risks, Limitations and Biases
-----------------------------
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
Further details about the dataset for this model can be found in the OPUS readme: en-de
#### Training Data
##### Preprocessing
* pre-processing: normalization + SentencePiece
* dataset: opus
* download original weights: URL
* test set translations: URL
Evaluation
----------
#### Results
* test set scores: URL
#### Benchmarks
testset: URL, BLEU: 23.5, chr-F: 0.540
testset: URL, BLEU: 23.5, chr-F: 0.529
testset: URL, BLEU: 22.3, chr-F: 0.530
testset: URL, BLEU: 24.9, chr-F: 0.544
testset: URL, BLEU: 22.5, chr-F: 0.524
testset: URL, BLEU: 23.0, chr-F: 0.525
testset: URL, BLEU: 26.9, chr-F: 0.553
testset: URL, BLEU: 31.1, chr-F: 0.594
testset: URL, BLEU: 37.0, chr-F: 0.636
testset: URL, BLEU: 29.9, chr-F: 0.586
testset: URL, BLEU: 45.2, chr-F: 0.690
testset: URL, BLEU: 40.9, chr-F: 0.654
testset: URL, BLEU: 47.3, chr-F: 0.664
How to Get Started With the Model
---------------------------------
| [
"### opus-mt-en-de\n\n\nTable of Contents\n-----------------\n\n\n* Model Details\n* Uses\n* Risks, Limitations and Biases\n* Training\n* Evaluation\n* Citation Information\n* How to Get Started With the Model\n\n\nModel Details\n-------------\n\n\nModel Description:\n\n\n* Developed by: Language Technology Research Group at the University of Helsinki\n* Model Type: Translation\n* Language(s):\n\t+ Source Language: English\n\t+ Target Language: German\n* License: CC-BY-4.0\n* Resources for more information:\n\t+ GitHub Repo\n\n\nUses\n----",
"#### Direct Use\n\n\nThis model can be used for translation and text-to-text generation.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nFurther details about the dataset for this model can be found in the OPUS readme: en-de",
"#### Training Data",
"##### Preprocessing\n\n\n* pre-processing: normalization + SentencePiece\n* dataset: opus\n* download original weights: URL\n* test set translations: URL\n\n\nEvaluation\n----------",
"#### Results\n\n\n* test set scores: URL",
"#### Benchmarks\n\n\ntestset: URL, BLEU: 23.5, chr-F: 0.540\ntestset: URL, BLEU: 23.5, chr-F: 0.529\ntestset: URL, BLEU: 22.3, chr-F: 0.530\ntestset: URL, BLEU: 24.9, chr-F: 0.544\ntestset: URL, BLEU: 22.5, chr-F: 0.524\ntestset: URL, BLEU: 23.0, chr-F: 0.525\ntestset: URL, BLEU: 26.9, chr-F: 0.553\ntestset: URL, BLEU: 31.1, chr-F: 0.594\ntestset: URL, BLEU: 37.0, chr-F: 0.636\ntestset: URL, BLEU: 29.9, chr-F: 0.586\ntestset: URL, BLEU: 45.2, chr-F: 0.690\ntestset: URL, BLEU: 40.9, chr-F: 0.654\ntestset: URL, BLEU: 47.3, chr-F: 0.664\n\n\nHow to Get Started With the Model\n---------------------------------"
] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #marian #text2text-generation #translation #en #de #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-de\n\n\nTable of Contents\n-----------------\n\n\n* Model Details\n* Uses\n* Risks, Limitations and Biases\n* Training\n* Evaluation\n* Citation Information\n* How to Get Started With the Model\n\n\nModel Details\n-------------\n\n\nModel Description:\n\n\n* Developed by: Language Technology Research Group at the University of Helsinki\n* Model Type: Translation\n* Language(s):\n\t+ Source Language: English\n\t+ Target Language: German\n* License: CC-BY-4.0\n* Resources for more information:\n\t+ GitHub Repo\n\n\nUses\n----",
"#### Direct Use\n\n\nThis model can be used for translation and text-to-text generation.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nFurther details about the dataset for this model can be found in the OPUS readme: en-de",
"#### Training Data",
"##### Preprocessing\n\n\n* pre-processing: normalization + SentencePiece\n* dataset: opus\n* download original weights: URL\n* test set translations: URL\n\n\nEvaluation\n----------",
"#### Results\n\n\n* test set scores: URL",
"#### Benchmarks\n\n\ntestset: URL, BLEU: 23.5, chr-F: 0.540\ntestset: URL, BLEU: 23.5, chr-F: 0.529\ntestset: URL, BLEU: 22.3, chr-F: 0.530\ntestset: URL, BLEU: 24.9, chr-F: 0.544\ntestset: URL, BLEU: 22.5, chr-F: 0.524\ntestset: URL, BLEU: 23.0, chr-F: 0.525\ntestset: URL, BLEU: 26.9, chr-F: 0.553\ntestset: URL, BLEU: 31.1, chr-F: 0.594\ntestset: URL, BLEU: 37.0, chr-F: 0.636\ntestset: URL, BLEU: 29.9, chr-F: 0.586\ntestset: URL, BLEU: 45.2, chr-F: 0.690\ntestset: URL, BLEU: 40.9, chr-F: 0.654\ntestset: URL, BLEU: 47.3, chr-F: 0.664\n\n\nHow to Get Started With the Model\n---------------------------------"
] | [
57,
136,
140,
6,
49,
12,
341
] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #marian #text2text-generation #translation #en #de #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-de\n\n\nTable of Contents\n-----------------\n\n\n* Model Details\n* Uses\n* Risks, Limitations and Biases\n* Training\n* Evaluation\n* Citation Information\n* How to Get Started With the Model\n\n\nModel Details\n-------------\n\n\nModel Description:\n\n\n* Developed by: Language Technology Research Group at the University of Helsinki\n* Model Type: Translation\n* Language(s):\n\t+ Source Language: English\n\t+ Target Language: German\n* License: CC-BY-4.0\n* Resources for more information:\n\t+ GitHub Repo\n\n\nUses\n----#### Direct Use\n\n\nThis model can be used for translation and text-to-text generation.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nFurther details about the dataset for this model can be found in the OPUS readme: en-de#### Training Data##### Preprocessing\n\n\n* pre-processing: normalization + SentencePiece\n* dataset: opus\n* download original weights: URL\n* test set translations: URL\n\n\nEvaluation\n----------#### Results\n\n\n* test set scores: URL#### Benchmarks\n\n\ntestset: URL, BLEU: 23.5, chr-F: 0.540\ntestset: URL, BLEU: 23.5, chr-F: 0.529\ntestset: URL, BLEU: 22.3, chr-F: 0.530\ntestset: URL, BLEU: 24.9, chr-F: 0.544\ntestset: URL, BLEU: 22.5, chr-F: 0.524\ntestset: URL, BLEU: 23.0, chr-F: 0.525\ntestset: URL, BLEU: 26.9, chr-F: 0.553\ntestset: URL, BLEU: 31.1, chr-F: 0.594\ntestset: URL, BLEU: 37.0, chr-F: 0.636\ntestset: URL, BLEU: 29.9, chr-F: 0.586\ntestset: URL, BLEU: 45.2, chr-F: 0.690\ntestset: URL, BLEU: 40.9, chr-F: 0.654\ntestset: URL, BLEU: 47.3, chr-F: 0.664\n\n\nHow to Get Started With the Model\n---------------------------------"
] |
translation | transformers |
### eng-dra
* source group: English
* target group: Dravidian languages
* OPUS readme: [eng-dra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-dra/README.md)
* model: transformer
* source language(s): eng
* target language(s): kan mal tam tel
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-kan.eng.kan | 4.7 | 0.348 |
| Tatoeba-test.eng-mal.eng.mal | 13.1 | 0.515 |
| Tatoeba-test.eng.multi | 10.7 | 0.463 |
| Tatoeba-test.eng-tam.eng.tam | 9.0 | 0.444 |
| Tatoeba-test.eng-tel.eng.tel | 7.1 | 0.363 |
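Because this is a multilingual model, every input sentence needs the sentence-initial `>>id<<` target-language token described above. A minimal sketch of preparing such inputs (the helper name and the example sentences are illustrative, not from the card):

```python
# Prepends the sentence-initial target-language token this multilingual
# model requires. Valid IDs follow the target language(s) listed above
# (kan, mal, tam, tel).
def add_lang_token(text: str, lang_id: str) -> str:
    return f">>{lang_id}<< {text}"

batch = [
    add_lang_token("How are you?", "tam"),   # translate into Tamil
    add_lang_token("Good morning.", "tel"),  # translate into Telugu
]

# The prefixed strings would then go through the usual tokenizer/model
# pipeline (omitted here: it requires downloading the checkpoint):
# from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-dra")
# model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-dra")
```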
### System Info:
- hf_name: eng-dra
- source_languages: eng
- target_languages: dra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-dra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ta', 'kn', 'ml', 'te', 'dra']
- src_constituents: {'eng'}
- tgt_constituents: {'tam', 'kan', 'mal', 'tel'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-dra/opus-2020-07-26.test.txt
- src_alpha3: eng
- tgt_alpha3: dra
- short_pair: en-dra
- chrF2_score: 0.46299999999999997
- bleu: 10.7
- brevity_penalty: 1.0
- ref_len: 7928.0
- src_name: English
- tgt_name: Dravidian languages
- train_date: 2020-07-26
- src_alpha2: en
- tgt_alpha2: dra
- prefer_old: False
- long_pair: eng-dra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "ta", "kn", "ml", "te", "dra"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-dra | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ta",
"kn",
"ml",
"te",
"dra",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"ta",
"kn",
"ml",
"te",
"dra"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ta #kn #ml #te #dra #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-dra
* source group: English
* target group: Dravidian languages
* OPUS readme: eng-dra
* model: transformer
* source language(s): eng
* target language(s): kan mal tam tel
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 4.7, chr-F: 0.348
testset: URL, BLEU: 13.1, chr-F: 0.515
testset: URL, BLEU: 10.7, chr-F: 0.463
testset: URL, BLEU: 9.0, chr-F: 0.444
testset: URL, BLEU: 7.1, chr-F: 0.363
### System Info:
* hf\_name: eng-dra
* source\_languages: eng
* target\_languages: dra
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'ta', 'kn', 'ml', 'te', 'dra']
* src\_constituents: {'eng'}
* tgt\_constituents: {'tam', 'kan', 'mal', 'tel'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: dra
* short\_pair: en-dra
* chrF2\_score: 0.46299999999999997
* bleu: 10.7
* brevity\_penalty: 1.0
* ref\_len: 7928.0
* src\_name: English
* tgt\_name: Dravidian languages
* train\_date: 2020-07-26
* src\_alpha2: en
* tgt\_alpha2: dra
* prefer\_old: False
* long\_pair: eng-dra
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-dra\n\n\n* source group: English\n* target group: Dravidian languages\n* OPUS readme: eng-dra\n* model: transformer\n* source language(s): eng\n* target language(s): kan mal tam tel\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 4.7, chr-F: 0.348\ntestset: URL, BLEU: 13.1, chr-F: 0.515\ntestset: URL, BLEU: 10.7, chr-F: 0.463\ntestset: URL, BLEU: 9.0, chr-F: 0.444\ntestset: URL, BLEU: 7.1, chr-F: 0.363",
"### System Info:\n\n\n* hf\\_name: eng-dra\n* source\\_languages: eng\n* target\\_languages: dra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ta', 'kn', 'ml', 'te', 'dra']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'tam', 'kan', 'mal', 'tel'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: dra\n* short\\_pair: en-dra\n* chrF2\\_score: 0.46299999999999997\n* bleu: 10.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 7928.0\n* src\\_name: English\n* tgt\\_name: Dravidian languages\n* train\\_date: 2020-07-26\n* src\\_alpha2: en\n* tgt\\_alpha2: dra\n* prefer\\_old: False\n* long\\_pair: eng-dra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ta #kn #ml #te #dra #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-dra\n\n\n* source group: English\n* target group: Dravidian languages\n* OPUS readme: eng-dra\n* model: transformer\n* source language(s): eng\n* target language(s): kan mal tam tel\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 4.7, chr-F: 0.348\ntestset: URL, BLEU: 13.1, chr-F: 0.515\ntestset: URL, BLEU: 10.7, chr-F: 0.463\ntestset: URL, BLEU: 9.0, chr-F: 0.444\ntestset: URL, BLEU: 7.1, chr-F: 0.363",
"### System Info:\n\n\n* hf\\_name: eng-dra\n* source\\_languages: eng\n* target\\_languages: dra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ta', 'kn', 'ml', 'te', 'dra']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'tam', 'kan', 'mal', 'tel'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: dra\n* short\\_pair: en-dra\n* chrF2\\_score: 0.46299999999999997\n* bleu: 10.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 7928.0\n* src\\_name: English\n* tgt\\_name: Dravidian languages\n* train\\_date: 2020-07-26\n* src\\_alpha2: en\n* tgt\\_alpha2: dra\n* prefer\\_old: False\n* long\\_pair: eng-dra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
60,
254,
441
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ta #kn #ml #te #dra #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-dra\n\n\n* source group: English\n* target group: Dravidian languages\n* OPUS readme: eng-dra\n* model: transformer\n* source language(s): eng\n* target language(s): kan mal tam tel\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 4.7, chr-F: 0.348\ntestset: URL, BLEU: 13.1, chr-F: 0.515\ntestset: URL, BLEU: 10.7, chr-F: 0.463\ntestset: URL, BLEU: 9.0, chr-F: 0.444\ntestset: URL, BLEU: 7.1, chr-F: 0.363### System Info:\n\n\n* hf\\_name: eng-dra\n* source\\_languages: eng\n* target\\_languages: dra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ta', 'kn', 'ml', 'te', 'dra']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'tam', 'kan', 'mal', 'tel'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: dra\n* short\\_pair: en-dra\n* chrF2\\_score: 0.46299999999999997\n* bleu: 10.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 7928.0\n* src\\_name: English\n* tgt\\_name: Dravidian languages\n* train\\_date: 2020-07-26\n* src\\_alpha2: en\n* tgt\\_alpha2: dra\n* prefer\\_old: False\n* long\\_pair: eng-dra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-ee
* source languages: en
* target languages: ee
* OPUS readme: [en-ee](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ee/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ee/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ee/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ee/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ee | 38.2 | 0.591 |
| Tatoeba.en.ee | 6.0 | 0.347 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ee | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ee",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ee #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ee
* source languages: en
* target languages: ee
* OPUS readme: en-ee
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.2, chr-F: 0.591
testset: URL, BLEU: 6.0, chr-F: 0.347
| [
"### opus-mt-en-ee\n\n\n* source languages: en\n* target languages: ee\n* OPUS readme: en-ee\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.2, chr-F: 0.591\ntestset: URL, BLEU: 6.0, chr-F: 0.347"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ee #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ee\n\n\n* source languages: en\n* target languages: ee\n* OPUS readme: en-ee\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.2, chr-F: 0.591\ntestset: URL, BLEU: 6.0, chr-F: 0.347"
] | [
51,
129
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ee #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ee\n\n\n* source languages: en\n* target languages: ee\n* OPUS readme: en-ee\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.2, chr-F: 0.591\ntestset: URL, BLEU: 6.0, chr-F: 0.347"
] |
translation | transformers |
### opus-mt-en-efi
* source languages: en
* target languages: efi
* OPUS readme: [en-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-efi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-efi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-efi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-efi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.efi | 38.0 | 0.568 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-efi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"efi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #efi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-efi
* source languages: en
* target languages: efi
* OPUS readme: en-efi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.0, chr-F: 0.568
| [
"### opus-mt-en-efi\n\n\n* source languages: en\n* target languages: efi\n* OPUS readme: en-efi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.0, chr-F: 0.568"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #efi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-efi\n\n\n* source languages: en\n* target languages: efi\n* OPUS readme: en-efi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.0, chr-F: 0.568"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #efi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-efi\n\n\n* source languages: en\n* target languages: efi\n* OPUS readme: en-efi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.0, chr-F: 0.568"
] |
translation | transformers |
### opus-mt-en-el
* source languages: en
* target languages: el
* OPUS readme: [en-el](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-el/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-el/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-el/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-el/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.el | 56.4 | 0.745 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-el | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"el",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #el #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-el
* source languages: en
* target languages: el
* OPUS readme: en-el
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 56.4, chr-F: 0.745
| [
"### opus-mt-en-el\n\n\n* source languages: en\n* target languages: el\n* OPUS readme: en-el\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.4, chr-F: 0.745"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #el #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-el\n\n\n* source languages: en\n* target languages: el\n* OPUS readme: en-el\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.4, chr-F: 0.745"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #el #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-el\n\n\n* source languages: en\n* target languages: el\n* OPUS readme: en-el\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.4, chr-F: 0.745"
] |
translation | transformers |
### opus-mt-en-eo
* source languages: en
* target languages: eo
* OPUS readme: [en-eo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-eo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-eo/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-eo/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-eo/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.eo | 49.5 | 0.682 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-eo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-eo
* source languages: en
* target languages: eo
* OPUS readme: en-eo
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 49.5, chr-F: 0.682
| [
"### opus-mt-en-eo\n\n\n* source languages: en\n* target languages: eo\n* OPUS readme: en-eo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.5, chr-F: 0.682"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-eo\n\n\n* source languages: en\n* target languages: eo\n* OPUS readme: en-eo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.5, chr-F: 0.682"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-eo\n\n\n* source languages: en\n* target languages: eo\n* OPUS readme: en-eo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.5, chr-F: 0.682"
] |
translation | transformers |
### eng-spa
* source group: English
* target group: Spanish
* OPUS readme: [eng-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md)
* model: transformer
* source language(s): eng
* target language(s): spa
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip)
* test set translations: [opus-2020-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt)
* test set scores: [opus-2020-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-engspa.eng.spa | 31.0 | 0.583 |
| news-test2008-engspa.eng.spa | 29.7 | 0.564 |
| newstest2009-engspa.eng.spa | 30.2 | 0.578 |
| newstest2010-engspa.eng.spa | 36.9 | 0.620 |
| newstest2011-engspa.eng.spa | 38.2 | 0.619 |
| newstest2012-engspa.eng.spa | 39.0 | 0.625 |
| newstest2013-engspa.eng.spa | 35.0 | 0.598 |
| Tatoeba-test.eng.spa | 54.9 | 0.721 |
### System Info:
- hf_name: eng-spa
- source_languages: eng
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'es']
- src_constituents: {'eng'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt
- src_alpha3: eng
- tgt_alpha3: spa
- short_pair: en-es
- chrF2_score: 0.721
- bleu: 54.9
- brevity_penalty: 0.978
- ref_len: 77311.0
- src_name: English
- tgt_name: Spanish
- train_date: 2020-08-18 00:00:00
- src_alpha2: en
- tgt_alpha2: es
- prefer_old: False
- long_pair: eng-spa
- helsinki_git_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82
- transformers_git_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9
- port_machine: brutasse
- port_time: 2020-08-24-18:20 | {"language": ["en", "es"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-es | null | [
"transformers",
"pytorch",
"tf",
"jax",
"marian",
"text2text-generation",
"translation",
"en",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"es"
] | TAGS
#transformers #pytorch #tf #jax #marian #text2text-generation #translation #en #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-spa
* source group: English
* target group: Spanish
* OPUS readme: eng-spa
* model: transformer
* source language(s): eng
* target language(s): spa
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.0, chr-F: 0.583
testset: URL, BLEU: 29.7, chr-F: 0.564
testset: URL, BLEU: 30.2, chr-F: 0.578
testset: URL, BLEU: 36.9, chr-F: 0.620
testset: URL, BLEU: 38.2, chr-F: 0.619
testset: URL, BLEU: 39.0, chr-F: 0.625
testset: URL, BLEU: 35.0, chr-F: 0.598
testset: URL, BLEU: 54.9, chr-F: 0.721
### System Info:
* hf\_name: eng-spa
* source\_languages: eng
* target\_languages: spa
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'es']
* src\_constituents: {'eng'}
* tgt\_constituents: {'spa'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: spa
* short\_pair: en-es
* chrF2\_score: 0.721
* bleu: 54.9
* brevity\_penalty: 0.978
* ref\_len: 77311.0
* src\_name: English
* tgt\_name: Spanish
* train\_date: 2020-08-18 00:00:00
* src\_alpha2: en
* tgt\_alpha2: es
* prefer\_old: False
* long\_pair: eng-spa
* helsinki\_git\_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82
* transformers\_git\_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9
* port\_machine: brutasse
* port\_time: 2020-08-24-18:20
| [
"### eng-spa\n\n\n* source group: English\n* target group: Spanish\n* OPUS readme: eng-spa\n* model: transformer\n* source language(s): eng\n* target language(s): spa\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.0, chr-F: 0.583\ntestset: URL, BLEU: 29.7, chr-F: 0.564\ntestset: URL, BLEU: 30.2, chr-F: 0.578\ntestset: URL, BLEU: 36.9, chr-F: 0.620\ntestset: URL, BLEU: 38.2, chr-F: 0.619\ntestset: URL, BLEU: 39.0, chr-F: 0.625\ntestset: URL, BLEU: 35.0, chr-F: 0.598\ntestset: URL, BLEU: 54.9, chr-F: 0.721",
"### System Info:\n\n\n* hf\\_name: eng-spa\n* source\\_languages: eng\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'es']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: spa\n* short\\_pair: en-es\n* chrF2\\_score: 0.721\n* bleu: 54.9\n* brevity\\_penalty: 0.978\n* ref\\_len: 77311.0\n* src\\_name: English\n* tgt\\_name: Spanish\n* train\\_date: 2020-08-18 00:00:00\n* src\\_alpha2: en\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: eng-spa\n* helsinki\\_git\\_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82\n* transformers\\_git\\_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9\n* port\\_machine: brutasse\n* port\\_time: 2020-08-24-18:20"
] | [
"TAGS\n#transformers #pytorch #tf #jax #marian #text2text-generation #translation #en #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-spa\n\n\n* source group: English\n* target group: Spanish\n* OPUS readme: eng-spa\n* model: transformer\n* source language(s): eng\n* target language(s): spa\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.0, chr-F: 0.583\ntestset: URL, BLEU: 29.7, chr-F: 0.564\ntestset: URL, BLEU: 30.2, chr-F: 0.578\ntestset: URL, BLEU: 36.9, chr-F: 0.620\ntestset: URL, BLEU: 38.2, chr-F: 0.619\ntestset: URL, BLEU: 39.0, chr-F: 0.625\ntestset: URL, BLEU: 35.0, chr-F: 0.598\ntestset: URL, BLEU: 54.9, chr-F: 0.721",
"### System Info:\n\n\n* hf\\_name: eng-spa\n* source\\_languages: eng\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'es']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: spa\n* short\\_pair: en-es\n* chrF2\\_score: 0.721\n* bleu: 54.9\n* brevity\\_penalty: 0.978\n* ref\\_len: 77311.0\n* src\\_name: English\n* tgt\\_name: Spanish\n* train\\_date: 2020-08-18 00:00:00\n* src\\_alpha2: en\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: eng-spa\n* helsinki\\_git\\_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82\n* transformers\\_git\\_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9\n* port\\_machine: brutasse\n* port\\_time: 2020-08-24-18:20"
] | [
53,
286,
402
] | [
"TAGS\n#transformers #pytorch #tf #jax #marian #text2text-generation #translation #en #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-spa\n\n\n* source group: English\n* target group: Spanish\n* OPUS readme: eng-spa\n* model: transformer\n* source language(s): eng\n* target language(s): spa\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.0, chr-F: 0.583\ntestset: URL, BLEU: 29.7, chr-F: 0.564\ntestset: URL, BLEU: 30.2, chr-F: 0.578\ntestset: URL, BLEU: 36.9, chr-F: 0.620\ntestset: URL, BLEU: 38.2, chr-F: 0.619\ntestset: URL, BLEU: 39.0, chr-F: 0.625\ntestset: URL, BLEU: 35.0, chr-F: 0.598\ntestset: URL, BLEU: 54.9, chr-F: 0.721### System Info:\n\n\n* hf\\_name: eng-spa\n* source\\_languages: eng\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'es']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: spa\n* short\\_pair: en-es\n* chrF2\\_score: 0.721\n* bleu: 54.9\n* brevity\\_penalty: 0.978\n* ref\\_len: 77311.0\n* src\\_name: English\n* tgt\\_name: Spanish\n* train\\_date: 2020-08-18 00:00:00\n* src\\_alpha2: en\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: eng-spa\n* helsinki\\_git\\_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82\n* transformers\\_git\\_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9\n* port\\_machine: brutasse\n* port\\_time: 2020-08-24-18:20"
] |
translation | transformers |
### opus-mt-en-et
* source languages: en
* target languages: et
* OPUS readme: [en-et](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-et/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-et/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2018-enet.en.et | 21.8 | 0.540 |
| newstest2018-enet.en.et | 23.3 | 0.556 |
| Tatoeba.en.et | 54.0 | 0.717 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-et | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"et",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #et #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-et
* source languages: en
* target languages: et
* OPUS readme: en-et
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.8, chr-F: 0.540
testset: URL, BLEU: 23.3, chr-F: 0.556
testset: URL, BLEU: 54.0, chr-F: 0.717
| [
"### opus-mt-en-et\n\n\n* source languages: en\n* target languages: et\n* OPUS readme: en-et\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.540\ntestset: URL, BLEU: 23.3, chr-F: 0.556\ntestset: URL, BLEU: 54.0, chr-F: 0.717"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #et #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-et\n\n\n* source languages: en\n* target languages: et\n* OPUS readme: en-et\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.540\ntestset: URL, BLEU: 23.3, chr-F: 0.556\ntestset: URL, BLEU: 54.0, chr-F: 0.717"
] | [
51,
151
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #et #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-et\n\n\n* source languages: en\n* target languages: et\n* OPUS readme: en-et\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.540\ntestset: URL, BLEU: 23.3, chr-F: 0.556\ntestset: URL, BLEU: 54.0, chr-F: 0.717"
] |
translation | transformers |
### eng-eus
* source group: English
* target group: Basque
* OPUS readme: [eng-eus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-eus/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): eus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.eus | 31.8 | 0.590 |
### System Info:
- hf_name: eng-eus
- source_languages: eng
- target_languages: eus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-eus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'eu']
- src_constituents: {'eng'}
- tgt_constituents: {'eus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-eus/opus-2020-06-17.test.txt
- src_alpha3: eng
- tgt_alpha3: eus
- short_pair: en-eu
- chrF2_score: 0.59
- bleu: 31.8
- brevity_penalty: 0.9440000000000001
- ref_len: 7080.0
- src_name: English
- tgt_name: Basque
- train_date: 2020-06-17
- src_alpha2: en
- tgt_alpha2: eu
- prefer_old: False
- long_pair: eng-eus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "eu"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-eu | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"eu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"eu"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #eu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-eus
* source group: English
* target group: Basque
* OPUS readme: eng-eus
* model: transformer-align
* source language(s): eng
* target language(s): eus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.8, chr-F: 0.590
### System Info:
* hf\_name: eng-eus
* source\_languages: eng
* target\_languages: eus
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'eu']
* src\_constituents: {'eng'}
* tgt\_constituents: {'eus'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: eus
* short\_pair: en-eu
* chrF2\_score: 0.59
* bleu: 31.8
* brevity\_penalty: 0.9440000000000001
* ref\_len: 7080.0
* src\_name: English
* tgt\_name: Basque
* train\_date: 2020-06-17
* src\_alpha2: en
* tgt\_alpha2: eu
* prefer\_old: False
* long\_pair: eng-eus
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-eus\n\n\n* source group: English\n* target group: Basque\n* OPUS readme: eng-eus\n* model: transformer-align\n* source language(s): eng\n* target language(s): eus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.8, chr-F: 0.590",
"### System Info:\n\n\n* hf\\_name: eng-eus\n* source\\_languages: eng\n* target\\_languages: eus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'eu']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'eus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: eus\n* short\\_pair: en-eu\n* chrF2\\_score: 0.59\n* bleu: 31.8\n* brevity\\_penalty: 0.9440000000000001\n* ref\\_len: 7080.0\n* src\\_name: English\n* tgt\\_name: Basque\n* train\\_date: 2020-06-17\n* src\\_alpha2: en\n* tgt\\_alpha2: eu\n* prefer\\_old: False\n* long\\_pair: eng-eus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #eu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-eus\n\n\n* source group: English\n* target group: Basque\n* OPUS readme: eng-eus\n* model: transformer-align\n* source language(s): eng\n* target language(s): eus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.8, chr-F: 0.590",
"### System Info:\n\n\n* hf\\_name: eng-eus\n* source\\_languages: eng\n* target\\_languages: eus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'eu']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'eus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: eus\n* short\\_pair: en-eu\n* chrF2\\_score: 0.59\n* bleu: 31.8\n* brevity\\_penalty: 0.9440000000000001\n* ref\\_len: 7080.0\n* src\\_name: English\n* tgt\\_name: Basque\n* train\\_date: 2020-06-17\n* src\\_alpha2: en\n* tgt\\_alpha2: eu\n* prefer\\_old: False\n* long\\_pair: eng-eus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
133,
401
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #eu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-eus\n\n\n* source group: English\n* target group: Basque\n* OPUS readme: eng-eus\n* model: transformer-align\n* source language(s): eng\n* target language(s): eus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.8, chr-F: 0.590### System Info:\n\n\n* hf\\_name: eng-eus\n* source\\_languages: eng\n* target\\_languages: eus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'eu']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'eus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: eus\n* short\\_pair: en-eu\n* chrF2\\_score: 0.59\n* bleu: 31.8\n* brevity\\_penalty: 0.9440000000000001\n* ref\\_len: 7080.0\n* src\\_name: English\n* tgt\\_name: Basque\n* train\\_date: 2020-06-17\n* src\\_alpha2: en\n* tgt\\_alpha2: eu\n* prefer\\_old: False\n* long\\_pair: eng-eus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### eng-euq
* source group: English
* target group: Basque (family)
* OPUS readme: [eng-euq](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-euq/README.md)
* model: transformer
* source language(s): eng
* target language(s): eus
* model: transformer
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.eus | 27.9 | 0.555 |
| Tatoeba-test.eng-eus.eng.eus | 27.9 | 0.555 |
### System Info:
- hf_name: eng-euq
- source_languages: eng
- target_languages: euq
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-euq/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'euq']
- src_constituents: {'eng'}
- tgt_constituents: {'eus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-euq/opus-2020-07-26.test.txt
- src_alpha3: eng
- tgt_alpha3: euq
- short_pair: en-euq
- chrF2_score: 0.555
- bleu: 27.9
- brevity_penalty: 0.917
- ref_len: 7080.0
- src_name: English
- tgt_name: Basque (family)
- train_date: 2020-07-26
- src_alpha2: en
- tgt_alpha2: euq
- prefer_old: False
- long_pair: eng-euq
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "euq"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-euq | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"euq",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"euq"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #euq #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-euq
* source group: English
* target group: Basque (family)
* OPUS readme: eng-euq
* model: transformer
* source language(s): eng
* target language(s): eus
* model: transformer
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.9, chr-F: 0.555
testset: URL, BLEU: 27.9, chr-F: 0.555
### System Info:
* hf\_name: eng-euq
* source\_languages: eng
* target\_languages: euq
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'euq']
* src\_constituents: {'eng'}
* tgt\_constituents: {'eus'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm12k,spm12k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: euq
* short\_pair: en-euq
* chrF2\_score: 0.555
* bleu: 27.9
* brevity\_penalty: 0.917
* ref\_len: 7080.0
* src\_name: English
* tgt\_name: Basque (family)
* train\_date: 2020-07-26
* src\_alpha2: en
* tgt\_alpha2: euq
* prefer\_old: False
* long\_pair: eng-euq
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-euq\n\n\n* source group: English\n* target group: Basque (family)\n* OPUS readme: eng-euq\n* model: transformer\n* source language(s): eng\n* target language(s): eus\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.555\ntestset: URL, BLEU: 27.9, chr-F: 0.555",
"### System Info:\n\n\n* hf\\_name: eng-euq\n* source\\_languages: eng\n* target\\_languages: euq\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'euq']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'eus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: euq\n* short\\_pair: en-euq\n* chrF2\\_score: 0.555\n* bleu: 27.9\n* brevity\\_penalty: 0.917\n* ref\\_len: 7080.0\n* src\\_name: English\n* tgt\\_name: Basque (family)\n* train\\_date: 2020-07-26\n* src\\_alpha2: en\n* tgt\\_alpha2: euq\n* prefer\\_old: False\n* long\\_pair: eng-euq\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #euq #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-euq\n\n\n* source group: English\n* target group: Basque (family)\n* OPUS readme: eng-euq\n* model: transformer\n* source language(s): eng\n* target language(s): eus\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.555\ntestset: URL, BLEU: 27.9, chr-F: 0.555",
"### System Info:\n\n\n* hf\\_name: eng-euq\n* source\\_languages: eng\n* target\\_languages: euq\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'euq']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'eus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: euq\n* short\\_pair: en-euq\n* chrF2\\_score: 0.555\n* bleu: 27.9\n* brevity\\_penalty: 0.917\n* ref\\_len: 7080.0\n* src\\_name: English\n* tgt\\_name: Basque (family)\n* train\\_date: 2020-07-26\n* src\\_alpha2: en\n* tgt\\_alpha2: euq\n* prefer\\_old: False\n* long\\_pair: eng-euq\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
154,
401
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #euq #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-euq\n\n\n* source group: English\n* target group: Basque (family)\n* OPUS readme: eng-euq\n* model: transformer\n* source language(s): eng\n* target language(s): eus\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.555\ntestset: URL, BLEU: 27.9, chr-F: 0.555### System Info:\n\n\n* hf\\_name: eng-euq\n* source\\_languages: eng\n* target\\_languages: euq\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'euq']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'eus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: euq\n* short\\_pair: en-euq\n* chrF2\\_score: 0.555\n* bleu: 27.9\n* brevity\\_penalty: 0.917\n* ref\\_len: 7080.0\n* src\\_name: English\n* tgt\\_name: Basque (family)\n* train\\_date: 2020-07-26\n* src\\_alpha2: en\n* tgt\\_alpha2: euq\n* prefer\\_old: False\n* long\\_pair: eng-euq\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-fi
* source languages: en
* target languages: fi
* OPUS readme: [en-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fi/README.md)
* dataset: opus+bt-news
* model: transformer
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt-news-2020-03-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.zip)
* test set translations: [opus+bt-news-2020-03-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.test.txt)
* test set scores: [opus+bt-news-2020-03-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fi/opus+bt-news-2020-03-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2019-enfi.en.fi | 25.7 | 0.578 |
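All of these cards follow the `Helsinki-NLP/opus-mt-{src}-{tgt}` naming scheme on the Hugging Face Hub. A small helper for building the repo id is sketched below (the actual `transformers` pipeline call is left as a comment because it downloads model weights):

```python
def opus_mt_repo(src: str, tgt: str) -> str:
    """Build the Hub repo id for an OPUS-MT language pair, e.g. en -> fi."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

print(opus_mt_repo("en", "fi"))  # Helsinki-NLP/opus-mt-en-fi

# Typical use (requires network access to fetch the weights):
# from transformers import pipeline
# translator = pipeline("translation", model=opus_mt_repo("en", "fi"))
# translator("The weather is cold today.")
```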
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-fi
* source languages: en
* target languages: fi
* OPUS readme: en-fi
* dataset: opus+bt-news
* model: transformer
* pre-processing: normalization + SentencePiece
* download original weights: opus+URL
* test set translations: opus+URL
* test set scores: opus+URL
Benchmarks
----------
testset: URL, BLEU: 25.7, chr-F: 0.578
| [
"### opus-mt-en-fi\n\n\n* source languages: en\n* target languages: fi\n* OPUS readme: en-fi\n* dataset: opus+bt-news\n* model: transformer\n* pre-processing: normalization + SentencePiece\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.7, chr-F: 0.578"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-fi\n\n\n* source languages: en\n* target languages: fi\n* OPUS readme: en-fi\n* dataset: opus+bt-news\n* model: transformer\n* pre-processing: normalization + SentencePiece\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.7, chr-F: 0.578"
] | [
51,
114
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-fi\n\n\n* source languages: en\n* target languages: fi\n* OPUS readme: en-fi\n* dataset: opus+bt-news\n* model: transformer\n* pre-processing: normalization + SentencePiece\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.7, chr-F: 0.578"
] |
translation | transformers |
### eng-fiu
* source group: English
* target group: Finno-Ugrian languages
* OPUS readme: [eng-fiu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-fiu/README.md)
* model: transformer
* source language(s): eng
* target language(s): est fin fkv_Latn hun izh kpv krl liv_Latn mdf mhr myv sma sme udm vro
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2015-enfi-engfin.eng.fin | 18.7 | 0.522 |
| newsdev2018-enet-engest.eng.est | 19.4 | 0.521 |
| newssyscomb2009-enghun.eng.hun | 15.5 | 0.472 |
| newstest2009-enghun.eng.hun | 15.4 | 0.468 |
| newstest2015-enfi-engfin.eng.fin | 19.9 | 0.532 |
| newstest2016-enfi-engfin.eng.fin | 21.1 | 0.544 |
| newstest2017-enfi-engfin.eng.fin | 23.8 | 0.567 |
| newstest2018-enet-engest.eng.est | 20.4 | 0.532 |
| newstest2018-enfi-engfin.eng.fin | 15.6 | 0.498 |
| newstest2019-enfi-engfin.eng.fin | 20.0 | 0.520 |
| newstestB2016-enfi-engfin.eng.fin | 17.0 | 0.512 |
| newstestB2017-enfi-engfin.eng.fin | 19.7 | 0.531 |
| Tatoeba-test.eng-chm.eng.chm | 0.9 | 0.115 |
| Tatoeba-test.eng-est.eng.est | 49.8 | 0.689 |
| Tatoeba-test.eng-fin.eng.fin | 34.7 | 0.597 |
| Tatoeba-test.eng-fkv.eng.fkv | 1.3 | 0.187 |
| Tatoeba-test.eng-hun.eng.hun | 35.2 | 0.589 |
| Tatoeba-test.eng-izh.eng.izh | 6.0 | 0.163 |
| Tatoeba-test.eng-kom.eng.kom | 3.4 | 0.012 |
| Tatoeba-test.eng-krl.eng.krl | 6.4 | 0.202 |
| Tatoeba-test.eng-liv.eng.liv | 1.6 | 0.102 |
| Tatoeba-test.eng-mdf.eng.mdf | 3.7 | 0.008 |
| Tatoeba-test.eng.multi | 35.4 | 0.590 |
| Tatoeba-test.eng-myv.eng.myv | 1.4 | 0.014 |
| Tatoeba-test.eng-sma.eng.sma | 2.6 | 0.097 |
| Tatoeba-test.eng-sme.eng.sme | 7.3 | 0.221 |
| Tatoeba-test.eng-udm.eng.udm | 1.4 | 0.079 |
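Because this model targets multiple languages, the card above notes that each input needs a sentence-initial `>>id<<` token. A minimal sketch of that preprocessing step (the valid ids, e.g. `fin` or `hun`, come from the target language list above):

```python
def add_target_token(text: str, target_id: str) -> str:
    """Prefix the sentence-initial language token required by
    multilingual OPUS-MT targets, e.g. >>fin<< for Finnish."""
    return f">>{target_id}<< {text}"

print(add_target_token("Good morning!", "fin"))  # >>fin<< Good morning!
```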
### System Info:
- hf_name: eng-fiu
- source_languages: eng
- target_languages: fiu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-fiu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'se', 'fi', 'hu', 'et', 'fiu']
- src_constituents: {'eng'}
- tgt_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fiu/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: fiu
- short_pair: en-fiu
- chrF2_score: 0.59
- bleu: 35.4
- brevity_penalty: 0.9440000000000001
- ref_len: 59311.0
- src_name: English
- tgt_name: Finno-Ugrian languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: fiu
- prefer_old: False
- long_pair: eng-fiu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "se", "fi", "hu", "et", "fiu"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-fiu | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"se",
"fi",
"hu",
"et",
"fiu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"se",
"fi",
"hu",
"et",
"fiu"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #se #fi #hu #et #fiu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-fiu
* source group: English
* target group: Finno-Ugrian languages
* OPUS readme: eng-fiu
* model: transformer
* source language(s): eng
* target language(s): est fin fkv\_Latn hun izh kpv krl liv\_Latn mdf mhr myv sma sme udm vro
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 18.7, chr-F: 0.522
testset: URL, BLEU: 19.4, chr-F: 0.521
testset: URL, BLEU: 15.5, chr-F: 0.472
testset: URL, BLEU: 15.4, chr-F: 0.468
testset: URL, BLEU: 19.9, chr-F: 0.532
testset: URL, BLEU: 21.1, chr-F: 0.544
testset: URL, BLEU: 23.8, chr-F: 0.567
testset: URL, BLEU: 20.4, chr-F: 0.532
testset: URL, BLEU: 15.6, chr-F: 0.498
testset: URL, BLEU: 20.0, chr-F: 0.520
testset: URL, BLEU: 17.0, chr-F: 0.512
testset: URL, BLEU: 19.7, chr-F: 0.531
testset: URL, BLEU: 0.9, chr-F: 0.115
testset: URL, BLEU: 49.8, chr-F: 0.689
testset: URL, BLEU: 34.7, chr-F: 0.597
testset: URL, BLEU: 1.3, chr-F: 0.187
testset: URL, BLEU: 35.2, chr-F: 0.589
testset: URL, BLEU: 6.0, chr-F: 0.163
testset: URL, BLEU: 3.4, chr-F: 0.012
testset: URL, BLEU: 6.4, chr-F: 0.202
testset: URL, BLEU: 1.6, chr-F: 0.102
testset: URL, BLEU: 3.7, chr-F: 0.008
testset: URL, BLEU: 35.4, chr-F: 0.590
testset: URL, BLEU: 1.4, chr-F: 0.014
testset: URL, BLEU: 2.6, chr-F: 0.097
testset: URL, BLEU: 7.3, chr-F: 0.221
testset: URL, BLEU: 1.4, chr-F: 0.079
### System Info:
* hf\_name: eng-fiu
* source\_languages: eng
* target\_languages: fiu
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'se', 'fi', 'hu', 'et', 'fiu']
* src\_constituents: {'eng'}
* tgt\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\_Latn', 'est', 'mhr', 'sma'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: fiu
* short\_pair: en-fiu
* chrF2\_score: 0.59
* bleu: 35.4
* brevity\_penalty: 0.9440000000000001
* ref\_len: 59311.0
* src\_name: English
* tgt\_name: Finno-Ugrian languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: fiu
* prefer\_old: False
* long\_pair: eng-fiu
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-fiu\n\n\n* source group: English\n* target group: Finno-Ugrian languages\n* OPUS readme: eng-fiu\n* model: transformer\n* source language(s): eng\n* target language(s): est fin fkv\\_Latn hun izh kpv krl liv\\_Latn mdf mhr myv sma sme udm vro\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.7, chr-F: 0.522\ntestset: URL, BLEU: 19.4, chr-F: 0.521\ntestset: URL, BLEU: 15.5, chr-F: 0.472\ntestset: URL, BLEU: 15.4, chr-F: 0.468\ntestset: URL, BLEU: 19.9, chr-F: 0.532\ntestset: URL, BLEU: 21.1, chr-F: 0.544\ntestset: URL, BLEU: 23.8, chr-F: 0.567\ntestset: URL, BLEU: 20.4, chr-F: 0.532\ntestset: URL, BLEU: 15.6, chr-F: 0.498\ntestset: URL, BLEU: 20.0, chr-F: 0.520\ntestset: URL, BLEU: 17.0, chr-F: 0.512\ntestset: URL, BLEU: 19.7, chr-F: 0.531\ntestset: URL, BLEU: 0.9, chr-F: 0.115\ntestset: URL, BLEU: 49.8, chr-F: 0.689\ntestset: URL, BLEU: 34.7, chr-F: 0.597\ntestset: URL, BLEU: 1.3, chr-F: 0.187\ntestset: URL, BLEU: 35.2, chr-F: 0.589\ntestset: URL, BLEU: 6.0, chr-F: 0.163\ntestset: URL, BLEU: 3.4, chr-F: 0.012\ntestset: URL, BLEU: 6.4, chr-F: 0.202\ntestset: URL, BLEU: 1.6, chr-F: 0.102\ntestset: URL, BLEU: 3.7, chr-F: 0.008\ntestset: URL, BLEU: 35.4, chr-F: 0.590\ntestset: URL, BLEU: 1.4, chr-F: 0.014\ntestset: URL, BLEU: 2.6, chr-F: 0.097\ntestset: URL, BLEU: 7.3, chr-F: 0.221\ntestset: URL, BLEU: 1.4, chr-F: 0.079",
"### System Info:\n\n\n* hf\\_name: eng-fiu\n* source\\_languages: eng\n* target\\_languages: fiu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'se', 'fi', 'hu', 'et', 'fiu']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\\_Latn', 'est', 'mhr', 'sma'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: fiu\n* short\\_pair: en-fiu\n* chrF2\\_score: 0.59\n* bleu: 35.4\n* brevity\\_penalty: 0.9440000000000001\n* ref\\_len: 59311.0\n* src\\_name: English\n* tgt\\_name: Finno-Ugrian languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: fiu\n* prefer\\_old: False\n* long\\_pair: eng-fiu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #se #fi #hu #et #fiu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-fiu\n\n\n* source group: English\n* target group: Finno-Ugrian languages\n* OPUS readme: eng-fiu\n* model: transformer\n* source language(s): eng\n* target language(s): est fin fkv\\_Latn hun izh kpv krl liv\\_Latn mdf mhr myv sma sme udm vro\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.7, chr-F: 0.522\ntestset: URL, BLEU: 19.4, chr-F: 0.521\ntestset: URL, BLEU: 15.5, chr-F: 0.472\ntestset: URL, BLEU: 15.4, chr-F: 0.468\ntestset: URL, BLEU: 19.9, chr-F: 0.532\ntestset: URL, BLEU: 21.1, chr-F: 0.544\ntestset: URL, BLEU: 23.8, chr-F: 0.567\ntestset: URL, BLEU: 20.4, chr-F: 0.532\ntestset: URL, BLEU: 15.6, chr-F: 0.498\ntestset: URL, BLEU: 20.0, chr-F: 0.520\ntestset: URL, BLEU: 17.0, chr-F: 0.512\ntestset: URL, BLEU: 19.7, chr-F: 0.531\ntestset: URL, BLEU: 0.9, chr-F: 0.115\ntestset: URL, BLEU: 49.8, chr-F: 0.689\ntestset: URL, BLEU: 34.7, chr-F: 0.597\ntestset: URL, BLEU: 1.3, chr-F: 0.187\ntestset: URL, BLEU: 35.2, chr-F: 0.589\ntestset: URL, BLEU: 6.0, chr-F: 0.163\ntestset: URL, BLEU: 3.4, chr-F: 0.012\ntestset: URL, BLEU: 6.4, chr-F: 0.202\ntestset: URL, BLEU: 1.6, chr-F: 0.102\ntestset: URL, BLEU: 3.7, chr-F: 0.008\ntestset: URL, BLEU: 35.4, chr-F: 0.590\ntestset: URL, BLEU: 1.4, chr-F: 0.014\ntestset: URL, BLEU: 2.6, chr-F: 0.097\ntestset: URL, BLEU: 7.3, chr-F: 0.221\ntestset: URL, BLEU: 1.4, chr-F: 0.079",
"### System Info:\n\n\n* hf\\_name: eng-fiu\n* source\\_languages: eng\n* target\\_languages: fiu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'se', 'fi', 'hu', 'et', 'fiu']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\\_Latn', 'est', 'mhr', 'sma'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: fiu\n* short\\_pair: en-fiu\n* chrF2\\_score: 0.59\n* bleu: 35.4\n* brevity\\_penalty: 0.9440000000000001\n* ref\\_len: 59311.0\n* src\\_name: English\n* tgt\\_name: Finno-Ugrian languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: fiu\n* prefer\\_old: False\n* long\\_pair: eng-fiu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
60,
787,
510
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #se #fi #hu #et #fiu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-fiu\n\n\n* source group: English\n* target group: Finno-Ugrian languages\n* OPUS readme: eng-fiu\n* model: transformer\n* source language(s): eng\n* target language(s): est fin fkv\\_Latn hun izh kpv krl liv\\_Latn mdf mhr myv sma sme udm vro\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.7, chr-F: 0.522\ntestset: URL, BLEU: 19.4, chr-F: 0.521\ntestset: URL, BLEU: 15.5, chr-F: 0.472\ntestset: URL, BLEU: 15.4, chr-F: 0.468\ntestset: URL, BLEU: 19.9, chr-F: 0.532\ntestset: URL, BLEU: 21.1, chr-F: 0.544\ntestset: URL, BLEU: 23.8, chr-F: 0.567\ntestset: URL, BLEU: 20.4, chr-F: 0.532\ntestset: URL, BLEU: 15.6, chr-F: 0.498\ntestset: URL, BLEU: 20.0, chr-F: 0.520\ntestset: URL, BLEU: 17.0, chr-F: 0.512\ntestset: URL, BLEU: 19.7, chr-F: 0.531\ntestset: URL, BLEU: 0.9, chr-F: 0.115\ntestset: URL, BLEU: 49.8, chr-F: 0.689\ntestset: URL, BLEU: 34.7, chr-F: 0.597\ntestset: URL, BLEU: 1.3, chr-F: 0.187\ntestset: URL, BLEU: 35.2, chr-F: 0.589\ntestset: URL, BLEU: 6.0, chr-F: 0.163\ntestset: URL, BLEU: 3.4, chr-F: 0.012\ntestset: URL, BLEU: 6.4, chr-F: 0.202\ntestset: URL, BLEU: 1.6, chr-F: 0.102\ntestset: URL, BLEU: 3.7, chr-F: 0.008\ntestset: URL, BLEU: 35.4, chr-F: 0.590\ntestset: URL, BLEU: 1.4, chr-F: 0.014\ntestset: URL, BLEU: 2.6, chr-F: 0.097\ntestset: URL, BLEU: 7.3, chr-F: 0.221\ntestset: URL, BLEU: 1.4, chr-F: 0.079### System Info:\n\n\n* hf\\_name: eng-fiu\n* source\\_languages: eng\n* target\\_languages: fiu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: 
['translation']\n* languages: ['en', 'se', 'fi', 'hu', 'et', 'fiu']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\\_Latn', 'est', 'mhr', 'sma'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: fiu\n* short\\_pair: en-fiu\n* chrF2\\_score: 0.59\n* bleu: 35.4\n* brevity\\_penalty: 0.9440000000000001\n* ref\\_len: 59311.0\n* src\\_name: English\n* tgt\\_name: Finno-Ugrian languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: fiu\n* prefer\\_old: False\n* long\\_pair: eng-fiu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-fj
* source languages: en
* target languages: fj
* OPUS readme: [en-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fj/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fj/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fj/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.fj | 34.0 | 0.561 |
| Tatoeba.en.fj | 62.5 | 0.781 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-fj | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"fj",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #fj #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-fj
* source languages: en
* target languages: fj
* OPUS readme: en-fj
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 34.0, chr-F: 0.561
testset: URL, BLEU: 62.5, chr-F: 0.781
| [
"### opus-mt-en-fj\n\n\n* source languages: en\n* target languages: fj\n* OPUS readme: en-fj\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.0, chr-F: 0.561\ntestset: URL, BLEU: 62.5, chr-F: 0.781"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #fj #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-fj\n\n\n* source languages: en\n* target languages: fj\n* OPUS readme: en-fj\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.0, chr-F: 0.561\ntestset: URL, BLEU: 62.5, chr-F: 0.781"
] | [
52,
132
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #fj #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-fj\n\n\n* source languages: en\n* target languages: fj\n* OPUS readme: en-fj\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.0, chr-F: 0.561\ntestset: URL, BLEU: 62.5, chr-F: 0.781"
] |
translation | transformers |
### opus-mt-en-fr
* source languages: en
* target languages: fr
* OPUS readme: [en-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-fr/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-fr/opus-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdiscussdev2015-enfr.en.fr | 33.8 | 0.602 |
| newsdiscusstest2015-enfr.en.fr | 40.0 | 0.643 |
| newssyscomb2009.en.fr | 29.8 | 0.584 |
| news-test2008.en.fr | 27.5 | 0.554 |
| newstest2009.en.fr | 29.4 | 0.577 |
| newstest2010.en.fr | 32.7 | 0.596 |
| newstest2011.en.fr | 34.3 | 0.611 |
| newstest2012.en.fr | 31.8 | 0.592 |
| newstest2013.en.fr | 33.2 | 0.589 |
| Tatoeba.en.fr | 50.5 | 0.672 | | {"license": "apache-2.0", "pipeline_tag": "translation"} | Helsinki-NLP/opus-mt-en-fr | null | [
"transformers",
"pytorch",
"tf",
"jax",
"marian",
"text2text-generation",
"translation",
"en",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #jax #marian #text2text-generation #translation #en #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-fr
* source languages: en
* target languages: fr
* OPUS readme: en-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.8, chr-F: 0.602
testset: URL, BLEU: 40.0, chr-F: 0.643
testset: URL, BLEU: 29.8, chr-F: 0.584
testset: URL, BLEU: 27.5, chr-F: 0.554
testset: URL, BLEU: 29.4, chr-F: 0.577
testset: URL, BLEU: 32.7, chr-F: 0.596
testset: URL, BLEU: 34.3, chr-F: 0.611
testset: URL, BLEU: 31.8, chr-F: 0.592
testset: URL, BLEU: 33.2, chr-F: 0.589
testset: URL, BLEU: 50.5, chr-F: 0.672
| [
"### opus-mt-en-fr\n\n\n* source languages: en\n* target languages: fr\n* OPUS readme: en-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.8, chr-F: 0.602\ntestset: URL, BLEU: 40.0, chr-F: 0.643\ntestset: URL, BLEU: 29.8, chr-F: 0.584\ntestset: URL, BLEU: 27.5, chr-F: 0.554\ntestset: URL, BLEU: 29.4, chr-F: 0.577\ntestset: URL, BLEU: 32.7, chr-F: 0.596\ntestset: URL, BLEU: 34.3, chr-F: 0.611\ntestset: URL, BLEU: 31.8, chr-F: 0.592\ntestset: URL, BLEU: 33.2, chr-F: 0.589\ntestset: URL, BLEU: 50.5, chr-F: 0.672"
] | [
"TAGS\n#transformers #pytorch #tf #jax #marian #text2text-generation #translation #en #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-fr\n\n\n* source languages: en\n* target languages: fr\n* OPUS readme: en-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.8, chr-F: 0.602\ntestset: URL, BLEU: 40.0, chr-F: 0.643\ntestset: URL, BLEU: 29.8, chr-F: 0.584\ntestset: URL, BLEU: 27.5, chr-F: 0.554\ntestset: URL, BLEU: 29.4, chr-F: 0.577\ntestset: URL, BLEU: 32.7, chr-F: 0.596\ntestset: URL, BLEU: 34.3, chr-F: 0.611\ntestset: URL, BLEU: 31.8, chr-F: 0.592\ntestset: URL, BLEU: 33.2, chr-F: 0.589\ntestset: URL, BLEU: 50.5, chr-F: 0.672"
] | [
53,
313
] | [
"TAGS\n#transformers #pytorch #tf #jax #marian #text2text-generation #translation #en #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-fr\n\n\n* source languages: en\n* target languages: fr\n* OPUS readme: en-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.8, chr-F: 0.602\ntestset: URL, BLEU: 40.0, chr-F: 0.643\ntestset: URL, BLEU: 29.8, chr-F: 0.584\ntestset: URL, BLEU: 27.5, chr-F: 0.554\ntestset: URL, BLEU: 29.4, chr-F: 0.577\ntestset: URL, BLEU: 32.7, chr-F: 0.596\ntestset: URL, BLEU: 34.3, chr-F: 0.611\ntestset: URL, BLEU: 31.8, chr-F: 0.592\ntestset: URL, BLEU: 33.2, chr-F: 0.589\ntestset: URL, BLEU: 50.5, chr-F: 0.672"
] |
translation | transformers |
### eng-gle
* source group: English
* target group: Irish
* OPUS readme: [eng-gle](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gle/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): gle
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.gle | 37.5 | 0.593 |
### System Info:
- hf_name: eng-gle
- source_languages: eng
- target_languages: gle
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gle/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ga']
- src_constituents: {'eng'}
- tgt_constituents: {'gle'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gle/opus-2020-06-17.test.txt
- src_alpha3: eng
- tgt_alpha3: gle
- short_pair: en-ga
- chrF2_score: 0.593
- bleu: 37.5
- brevity_penalty: 1.0
- ref_len: 12200.0
- src_name: English
- tgt_name: Irish
- train_date: 2020-06-17
- src_alpha2: en
- tgt_alpha2: ga
- prefer_old: False
- long_pair: eng-gle
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "ga"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ga | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ga",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"ga"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ga #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-gle
* source group: English
* target group: Irish
* OPUS readme: eng-gle
* model: transformer-align
* source language(s): eng
* target language(s): gle
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.5, chr-F: 0.593
### System Info:
* hf\_name: eng-gle
* source\_languages: eng
* target\_languages: gle
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'ga']
* src\_constituents: {'eng'}
* tgt\_constituents: {'gle'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: gle
* short\_pair: en-ga
* chrF2\_score: 0.593
* bleu: 37.5
* brevity\_penalty: 1.0
* ref\_len: 12200.0
* src\_name: English
* tgt\_name: Irish
* train\_date: 2020-06-17
* src\_alpha2: en
* tgt\_alpha2: ga
* prefer\_old: False
* long\_pair: eng-gle
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-gle\n\n\n* source group: English\n* target group: Irish\n* OPUS readme: eng-gle\n* model: transformer-align\n* source language(s): eng\n* target language(s): gle\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.5, chr-F: 0.593",
"### System Info:\n\n\n* hf\\_name: eng-gle\n* source\\_languages: eng\n* target\\_languages: gle\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ga']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'gle'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: gle\n* short\\_pair: en-ga\n* chrF2\\_score: 0.593\n* bleu: 37.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 12200.0\n* src\\_name: English\n* tgt\\_name: Irish\n* train\\_date: 2020-06-17\n* src\\_alpha2: en\n* tgt\\_alpha2: ga\n* prefer\\_old: False\n* long\\_pair: eng-gle\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ga #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-gle\n\n\n* source group: English\n* target group: Irish\n* OPUS readme: eng-gle\n* model: transformer-align\n* source language(s): eng\n* target language(s): gle\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.5, chr-F: 0.593",
"### System Info:\n\n\n* hf\\_name: eng-gle\n* source\\_languages: eng\n* target\\_languages: gle\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ga']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'gle'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: gle\n* short\\_pair: en-ga\n* chrF2\\_score: 0.593\n* bleu: 37.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 12200.0\n* src\\_name: English\n* tgt\\_name: Irish\n* train\\_date: 2020-06-17\n* src\\_alpha2: en\n* tgt\\_alpha2: ga\n* prefer\\_old: False\n* long\\_pair: eng-gle\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
134,
395
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ga #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-gle\n\n\n* source group: English\n* target group: Irish\n* OPUS readme: eng-gle\n* model: transformer-align\n* source language(s): eng\n* target language(s): gle\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.5, chr-F: 0.593### System Info:\n\n\n* hf\\_name: eng-gle\n* source\\_languages: eng\n* target\\_languages: gle\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ga']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'gle'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: gle\n* short\\_pair: en-ga\n* chrF2\\_score: 0.593\n* bleu: 37.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 12200.0\n* src\\_name: English\n* tgt\\_name: Irish\n* train\\_date: 2020-06-17\n* src\\_alpha2: en\n* tgt\\_alpha2: ga\n* prefer\\_old: False\n* long\\_pair: eng-gle\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-gaa
* source languages: en
* target languages: gaa
* OPUS readme: [en-gaa](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gaa/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gaa/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.gaa | 39.9 | 0.593 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-gaa | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"gaa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #gaa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-gaa
* source languages: en
* target languages: gaa
* OPUS readme: en-gaa
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 39.9, chr-F: 0.593
| [
"### opus-mt-en-gaa\n\n\n* source languages: en\n* target languages: gaa\n* OPUS readme: en-gaa\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.9, chr-F: 0.593"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #gaa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-gaa\n\n\n* source languages: en\n* target languages: gaa\n* OPUS readme: en-gaa\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.9, chr-F: 0.593"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #gaa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-gaa\n\n\n* source languages: en\n* target languages: gaa\n* OPUS readme: en-gaa\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.9, chr-F: 0.593"
] |
translation | transformers |
### eng-gem
* source group: English
* target group: Germanic languages
* OPUS readme: [eng-gem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gem/README.md)
* model: transformer
* source language(s): eng
* target language(s): afr ang_Latn dan deu enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.eval.txt)
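Because this model serves many Germanic target languages from one set of weights, every input sentence must start with the `>>id<<` target-language token described above. A minimal sketch of prepending that token (the helper name is illustrative, not part of any library):

```python
def add_target_token(sentence: str, lang_id: str) -> str:
    """Prefix a sentence with the >>id<< token that multilingual
    OPUS-MT models use to select the target language."""
    return f">>{lang_id}<< {sentence}"

# Request Dutch ("nld") output for an English input sentence:
print(add_target_token("How are you?", "nld"))  # >>nld<< How are you?
```

The tagged string is what gets tokenized and fed to the model; `nld` here is one of the valid target IDs from the list above.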
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-engdeu.eng.deu | 20.9 | 0.521 |
| news-test2008-engdeu.eng.deu | 21.1 | 0.511 |
| newstest2009-engdeu.eng.deu | 20.5 | 0.516 |
| newstest2010-engdeu.eng.deu | 22.5 | 0.526 |
| newstest2011-engdeu.eng.deu | 20.5 | 0.508 |
| newstest2012-engdeu.eng.deu | 20.8 | 0.507 |
| newstest2013-engdeu.eng.deu | 24.6 | 0.534 |
| newstest2015-ende-engdeu.eng.deu | 27.9 | 0.569 |
| newstest2016-ende-engdeu.eng.deu | 33.2 | 0.607 |
| newstest2017-ende-engdeu.eng.deu | 26.5 | 0.560 |
| newstest2018-ende-engdeu.eng.deu | 39.4 | 0.648 |
| newstest2019-ende-engdeu.eng.deu | 35.0 | 0.613 |
| Tatoeba-test.eng-afr.eng.afr | 56.5 | 0.745 |
| Tatoeba-test.eng-ang.eng.ang | 6.7 | 0.154 |
| Tatoeba-test.eng-dan.eng.dan | 58.0 | 0.726 |
| Tatoeba-test.eng-deu.eng.deu | 40.3 | 0.615 |
| Tatoeba-test.eng-enm.eng.enm | 1.4 | 0.215 |
| Tatoeba-test.eng-fao.eng.fao | 7.2 | 0.304 |
| Tatoeba-test.eng-frr.eng.frr | 5.5 | 0.159 |
| Tatoeba-test.eng-fry.eng.fry | 19.4 | 0.433 |
| Tatoeba-test.eng-gos.eng.gos | 1.0 | 0.182 |
| Tatoeba-test.eng-got.eng.got | 0.3 | 0.012 |
| Tatoeba-test.eng-gsw.eng.gsw | 0.9 | 0.130 |
| Tatoeba-test.eng-isl.eng.isl | 23.4 | 0.505 |
| Tatoeba-test.eng-ksh.eng.ksh | 1.1 | 0.141 |
| Tatoeba-test.eng-ltz.eng.ltz | 20.3 | 0.379 |
| Tatoeba-test.eng.multi | 46.5 | 0.641 |
| Tatoeba-test.eng-nds.eng.nds | 20.6 | 0.458 |
| Tatoeba-test.eng-nld.eng.nld | 53.4 | 0.702 |
| Tatoeba-test.eng-non.eng.non | 0.6 | 0.166 |
| Tatoeba-test.eng-nor.eng.nor | 50.3 | 0.679 |
| Tatoeba-test.eng-pdc.eng.pdc | 3.9 | 0.189 |
| Tatoeba-test.eng-sco.eng.sco | 33.0 | 0.542 |
| Tatoeba-test.eng-stq.eng.stq | 2.3 | 0.274 |
| Tatoeba-test.eng-swe.eng.swe | 57.9 | 0.719 |
| Tatoeba-test.eng-swg.eng.swg | 1.2 | 0.171 |
| Tatoeba-test.eng-yid.eng.yid | 7.2 | 0.304 |
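The BLEU figures above incorporate BLEU's brevity penalty (reported as the `brevity_penalty` field under System Info below). A sketch of how that factor is conventionally computed, assuming the standard BLEU definition:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """Standard BLEU brevity penalty: 1.0 when the hypothesis corpus is
    at least as long as the reference, exp(1 - ref_len / hyp_len) otherwise."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# Translations shorter than the reference are penalised multiplicatively:
print(brevity_penalty(73328, 73328))        # 1.0 (no penalty)
print(round(brevity_penalty(50, 100), 4))   # 0.3679, i.e. exp(-1)
```

A reported penalty below 1.0 (such as 0.979 for this model) therefore indicates the system's output is slightly shorter overall than the reference set.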
### System Info:
- hf_name: eng-gem
- source_languages: eng
- target_languages: gem
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gem/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'lb', 'yi', 'gem']
- src_constituents: {'eng'}
- tgt_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gem/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: gem
- short_pair: en-gem
- chrF2_score: 0.6409999999999999
- bleu: 46.5
- brevity_penalty: 0.9790000000000001
- ref_len: 73328.0
- src_name: English
- tgt_name: Germanic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: gem
- prefer_old: False
- long_pair: eng-gem
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "da", "sv", "af", "nn", "fy", "fo", "de", "nb", "nl", "is", "lb", "yi", "gem"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-gem | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"da",
"sv",
"af",
"nn",
"fy",
"fo",
"de",
"nb",
"nl",
"is",
"lb",
"yi",
"gem",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"da",
"sv",
"af",
"nn",
"fy",
"fo",
"de",
"nb",
"nl",
"is",
"lb",
"yi",
"gem"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #da #sv #af #nn #fy #fo #de #nb #nl #is #lb #yi #gem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-gem
* source group: English
* target group: Germanic languages
* OPUS readme: eng-gem
* model: transformer
* source language(s): eng
* target language(s): afr ang\_Latn dan deu enm\_Latn fao frr fry gos got\_Goth gsw isl ksh ltz nds nld nno nob nob\_Hebr non\_Latn pdc sco stq swe swg yid
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 20.9, chr-F: 0.521
testset: URL, BLEU: 21.1, chr-F: 0.511
testset: URL, BLEU: 20.5, chr-F: 0.516
testset: URL, BLEU: 22.5, chr-F: 0.526
testset: URL, BLEU: 20.5, chr-F: 0.508
testset: URL, BLEU: 20.8, chr-F: 0.507
testset: URL, BLEU: 24.6, chr-F: 0.534
testset: URL, BLEU: 27.9, chr-F: 0.569
testset: URL, BLEU: 33.2, chr-F: 0.607
testset: URL, BLEU: 26.5, chr-F: 0.560
testset: URL, BLEU: 39.4, chr-F: 0.648
testset: URL, BLEU: 35.0, chr-F: 0.613
testset: URL, BLEU: 56.5, chr-F: 0.745
testset: URL, BLEU: 6.7, chr-F: 0.154
testset: URL, BLEU: 58.0, chr-F: 0.726
testset: URL, BLEU: 40.3, chr-F: 0.615
testset: URL, BLEU: 1.4, chr-F: 0.215
testset: URL, BLEU: 7.2, chr-F: 0.304
testset: URL, BLEU: 5.5, chr-F: 0.159
testset: URL, BLEU: 19.4, chr-F: 0.433
testset: URL, BLEU: 1.0, chr-F: 0.182
testset: URL, BLEU: 0.3, chr-F: 0.012
testset: URL, BLEU: 0.9, chr-F: 0.130
testset: URL, BLEU: 23.4, chr-F: 0.505
testset: URL, BLEU: 1.1, chr-F: 0.141
testset: URL, BLEU: 20.3, chr-F: 0.379
testset: URL, BLEU: 46.5, chr-F: 0.641
testset: URL, BLEU: 20.6, chr-F: 0.458
testset: URL, BLEU: 53.4, chr-F: 0.702
testset: URL, BLEU: 0.6, chr-F: 0.166
testset: URL, BLEU: 50.3, chr-F: 0.679
testset: URL, BLEU: 3.9, chr-F: 0.189
testset: URL, BLEU: 33.0, chr-F: 0.542
testset: URL, BLEU: 2.3, chr-F: 0.274
testset: URL, BLEU: 57.9, chr-F: 0.719
testset: URL, BLEU: 1.2, chr-F: 0.171
testset: URL, BLEU: 7.2, chr-F: 0.304
### System Info:
* hf\_name: eng-gem
* source\_languages: eng
* target\_languages: gem
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'lb', 'yi', 'gem']
* src\_constituents: {'eng'}
* tgt\_constituents: {'ksh', 'enm\_Latn', 'got\_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob\_Hebr', 'ang\_Latn', 'frr', 'non\_Latn', 'yid', 'nds'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: gem
* short\_pair: en-gem
* chrF2\_score: 0.6409999999999999
* bleu: 46.5
* brevity\_penalty: 0.9790000000000001
* ref\_len: 73328.0
* src\_name: English
* tgt\_name: Germanic languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: gem
* prefer\_old: False
* long\_pair: eng-gem
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-gem\n\n\n* source group: English\n* target group: Germanic languages\n* OPUS readme: eng-gem\n* model: transformer\n* source language(s): eng\n* target language(s): afr ang\\_Latn dan deu enm\\_Latn fao frr fry gos got\\_Goth gsw isl ksh ltz nds nld nno nob nob\\_Hebr non\\_Latn pdc sco stq swe swg yid\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.9, chr-F: 0.521\ntestset: URL, BLEU: 21.1, chr-F: 0.511\ntestset: URL, BLEU: 20.5, chr-F: 0.516\ntestset: URL, BLEU: 22.5, chr-F: 0.526\ntestset: URL, BLEU: 20.5, chr-F: 0.508\ntestset: URL, BLEU: 20.8, chr-F: 0.507\ntestset: URL, BLEU: 24.6, chr-F: 0.534\ntestset: URL, BLEU: 27.9, chr-F: 0.569\ntestset: URL, BLEU: 33.2, chr-F: 0.607\ntestset: URL, BLEU: 26.5, chr-F: 0.560\ntestset: URL, BLEU: 39.4, chr-F: 0.648\ntestset: URL, BLEU: 35.0, chr-F: 0.613\ntestset: URL, BLEU: 56.5, chr-F: 0.745\ntestset: URL, BLEU: 6.7, chr-F: 0.154\ntestset: URL, BLEU: 58.0, chr-F: 0.726\ntestset: URL, BLEU: 40.3, chr-F: 0.615\ntestset: URL, BLEU: 1.4, chr-F: 0.215\ntestset: URL, BLEU: 7.2, chr-F: 0.304\ntestset: URL, BLEU: 5.5, chr-F: 0.159\ntestset: URL, BLEU: 19.4, chr-F: 0.433\ntestset: URL, BLEU: 1.0, chr-F: 0.182\ntestset: URL, BLEU: 0.3, chr-F: 0.012\ntestset: URL, BLEU: 0.9, chr-F: 0.130\ntestset: URL, BLEU: 23.4, chr-F: 0.505\ntestset: URL, BLEU: 1.1, chr-F: 0.141\ntestset: URL, BLEU: 20.3, chr-F: 0.379\ntestset: URL, BLEU: 46.5, chr-F: 0.641\ntestset: URL, BLEU: 20.6, chr-F: 0.458\ntestset: URL, BLEU: 53.4, chr-F: 0.702\ntestset: URL, BLEU: 0.6, chr-F: 0.166\ntestset: URL, BLEU: 50.3, chr-F: 0.679\ntestset: URL, BLEU: 3.9, chr-F: 0.189\ntestset: URL, BLEU: 33.0, chr-F: 0.542\ntestset: URL, BLEU: 2.3, chr-F: 0.274\ntestset: URL, BLEU: 57.9, chr-F: 0.719\ntestset: URL, BLEU: 1.2, chr-F: 0.171\ntestset: URL, BLEU: 7.2, chr-F: 0.304",
"### System Info:\n\n\n* hf\\_name: eng-gem\n* source\\_languages: eng\n* target\\_languages: gem\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'lb', 'yi', 'gem']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'ksh', 'enm\\_Latn', 'got\\_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob\\_Hebr', 'ang\\_Latn', 'frr', 'non\\_Latn', 'yid', 'nds'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: gem\n* short\\_pair: en-gem\n* chrF2\\_score: 0.6409999999999999\n* bleu: 46.5\n* brevity\\_penalty: 0.9790000000000001\n* ref\\_len: 73328.0\n* src\\_name: English\n* tgt\\_name: Germanic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: gem\n* prefer\\_old: False\n* long\\_pair: eng-gem\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #da #sv #af #nn #fy #fo #de #nb #nl #is #lb #yi #gem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-gem\n\n\n* source group: English\n* target group: Germanic languages\n* OPUS readme: eng-gem\n* model: transformer\n* source language(s): eng\n* target language(s): afr ang\\_Latn dan deu enm\\_Latn fao frr fry gos got\\_Goth gsw isl ksh ltz nds nld nno nob nob\\_Hebr non\\_Latn pdc sco stq swe swg yid\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.9, chr-F: 0.521\ntestset: URL, BLEU: 21.1, chr-F: 0.511\ntestset: URL, BLEU: 20.5, chr-F: 0.516\ntestset: URL, BLEU: 22.5, chr-F: 0.526\ntestset: URL, BLEU: 20.5, chr-F: 0.508\ntestset: URL, BLEU: 20.8, chr-F: 0.507\ntestset: URL, BLEU: 24.6, chr-F: 0.534\ntestset: URL, BLEU: 27.9, chr-F: 0.569\ntestset: URL, BLEU: 33.2, chr-F: 0.607\ntestset: URL, BLEU: 26.5, chr-F: 0.560\ntestset: URL, BLEU: 39.4, chr-F: 0.648\ntestset: URL, BLEU: 35.0, chr-F: 0.613\ntestset: URL, BLEU: 56.5, chr-F: 0.745\ntestset: URL, BLEU: 6.7, chr-F: 0.154\ntestset: URL, BLEU: 58.0, chr-F: 0.726\ntestset: URL, BLEU: 40.3, chr-F: 0.615\ntestset: URL, BLEU: 1.4, chr-F: 0.215\ntestset: URL, BLEU: 7.2, chr-F: 0.304\ntestset: URL, BLEU: 5.5, chr-F: 0.159\ntestset: URL, BLEU: 19.4, chr-F: 0.433\ntestset: URL, BLEU: 1.0, chr-F: 0.182\ntestset: URL, BLEU: 0.3, chr-F: 0.012\ntestset: URL, BLEU: 0.9, chr-F: 0.130\ntestset: URL, BLEU: 23.4, chr-F: 0.505\ntestset: URL, BLEU: 1.1, chr-F: 0.141\ntestset: URL, BLEU: 20.3, chr-F: 0.379\ntestset: URL, BLEU: 46.5, chr-F: 0.641\ntestset: URL, BLEU: 20.6, chr-F: 0.458\ntestset: URL, BLEU: 53.4, chr-F: 0.702\ntestset: URL, BLEU: 0.6, chr-F: 0.166\ntestset: URL, BLEU: 50.3, chr-F: 0.679\ntestset: URL, BLEU: 3.9, chr-F: 0.189\ntestset: URL, BLEU: 33.0, chr-F: 0.542\ntestset: URL, BLEU: 2.3, chr-F: 0.274\ntestset: URL, BLEU: 57.9, chr-F: 0.719\ntestset: URL, BLEU: 1.2, chr-F: 0.171\ntestset: URL, BLEU: 7.2, chr-F: 0.304",
"### System Info:\n\n\n* hf\\_name: eng-gem\n* source\\_languages: eng\n* target\\_languages: gem\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'lb', 'yi', 'gem']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'ksh', 'enm\\_Latn', 'got\\_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob\\_Hebr', 'ang\\_Latn', 'frr', 'non\\_Latn', 'yid', 'nds'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: gem\n* short\\_pair: en-gem\n* chrF2\\_score: 0.6409999999999999\n* bleu: 46.5\n* brevity\\_penalty: 0.9790000000000001\n* ref\\_len: 73328.0\n* src\\_name: English\n* tgt\\_name: Germanic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: gem\n* prefer\\_old: False\n* long\\_pair: eng-gem\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
79,
1036,
610
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #da #sv #af #nn #fy #fo #de #nb #nl #is #lb #yi #gem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-gem\n\n\n* source group: English\n* target group: Germanic languages\n* OPUS readme: eng-gem\n* model: transformer\n* source language(s): eng\n* target language(s): afr ang\\_Latn dan deu enm\\_Latn fao frr fry gos got\\_Goth gsw isl ksh ltz nds nld nno nob nob\\_Hebr non\\_Latn pdc sco stq swe swg yid\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.9, chr-F: 0.521\ntestset: URL, BLEU: 21.1, chr-F: 0.511\ntestset: URL, BLEU: 20.5, chr-F: 0.516\ntestset: URL, BLEU: 22.5, chr-F: 0.526\ntestset: URL, BLEU: 20.5, chr-F: 0.508\ntestset: URL, BLEU: 20.8, chr-F: 0.507\ntestset: URL, BLEU: 24.6, chr-F: 0.534\ntestset: URL, BLEU: 27.9, chr-F: 0.569\ntestset: URL, BLEU: 33.2, chr-F: 0.607\ntestset: URL, BLEU: 26.5, chr-F: 0.560\ntestset: URL, BLEU: 39.4, chr-F: 0.648\ntestset: URL, BLEU: 35.0, chr-F: 0.613\ntestset: URL, BLEU: 56.5, chr-F: 0.745\ntestset: URL, BLEU: 6.7, chr-F: 0.154\ntestset: URL, BLEU: 58.0, chr-F: 0.726\ntestset: URL, BLEU: 40.3, chr-F: 0.615\ntestset: URL, BLEU: 1.4, chr-F: 0.215\ntestset: URL, BLEU: 7.2, chr-F: 0.304\ntestset: URL, BLEU: 5.5, chr-F: 0.159\ntestset: URL, BLEU: 19.4, chr-F: 0.433\ntestset: URL, BLEU: 1.0, chr-F: 0.182\ntestset: URL, BLEU: 0.3, chr-F: 0.012\ntestset: URL, BLEU: 0.9, chr-F: 0.130\ntestset: URL, BLEU: 23.4, chr-F: 0.505\ntestset: URL, BLEU: 1.1, chr-F: 0.141\ntestset: URL, BLEU: 20.3, chr-F: 0.379\ntestset: URL, BLEU: 46.5, chr-F: 0.641\ntestset: URL, BLEU: 20.6, chr-F: 0.458\ntestset: URL, BLEU: 53.4, chr-F: 0.702\ntestset: URL, BLEU: 0.6, chr-F: 0.166\ntestset: URL, BLEU: 50.3, chr-F: 0.679\ntestset: URL, BLEU: 3.9, chr-F: 0.189\ntestset: URL, BLEU: 33.0, chr-F: 0.542\ntestset: URL, BLEU: 2.3, chr-F: 0.274\ntestset: URL, BLEU: 57.9, chr-F: 0.719\ntestset: URL, BLEU: 1.2, chr-F: 0.171\ntestset: URL, BLEU: 7.2, chr-F: 0.304### System Info:\n\n\n* hf\\_name: eng-gem\n* source\\_languages: eng\n* target\\_languages: gem\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'lb', 'yi', 'gem']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'ksh', 'enm\\_Latn', 'got\\_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob\\_Hebr', 'ang\\_Latn', 'frr', 'non\\_Latn', 'yid', 'nds'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: gem\n* short\\_pair: en-gem\n* chrF2\\_score: 0.6409999999999999\n* bleu: 46.5\n* brevity\\_penalty: 0.9790000000000001\n* ref\\_len: 73328.0\n* src\\_name: English\n* tgt\\_name: Germanic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: gem\n* prefer\\_old: False\n* long\\_pair: eng-gem\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-gil
* source languages: en
* target languages: gil
* OPUS readme: [en-gil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gil/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gil/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gil/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.gil | 38.8 | 0.604 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-gil | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"gil",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #gil #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-gil
* source languages: en
* target languages: gil
* OPUS readme: en-gil
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.8, chr-F: 0.604
| [
"### opus-mt-en-gil\n\n\n* source languages: en\n* target languages: gil\n* OPUS readme: en-gil\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.8, chr-F: 0.604"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #gil #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-gil\n\n\n* source languages: en\n* target languages: gil\n* OPUS readme: en-gil\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.8, chr-F: 0.604"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #gil #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-gil\n\n\n* source languages: en\n* target languages: gil\n* OPUS readme: en-gil\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.8, chr-F: 0.604"
] |
translation | transformers |
### opus-mt-en-gl
* source languages: en
* target languages: gl
* OPUS readme: [en-gl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gl/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gl/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gl/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.gl | 36.4 | 0.572 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-gl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"gl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #gl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-gl
* source languages: en
* target languages: gl
* OPUS readme: en-gl
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 36.4, chr-F: 0.572
| [
"### opus-mt-en-gl\n\n\n* source languages: en\n* target languages: gl\n* OPUS readme: en-gl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.4, chr-F: 0.572"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #gl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-gl\n\n\n* source languages: en\n* target languages: gl\n* OPUS readme: en-gl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.4, chr-F: 0.572"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #gl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-gl\n\n\n* source languages: en\n* target languages: gl\n* OPUS readme: en-gl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.4, chr-F: 0.572"
] |
translation | transformers |
### eng-gmq
* source group: English
* target group: North Germanic languages
* OPUS readme: [eng-gmq](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmq/README.md)
* model: transformer
* source language(s): eng
* target language(s): dan fao isl nno nob nob_Hebr non_Latn swe
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-dan.eng.dan | 57.7 | 0.724 |
| Tatoeba-test.eng-fao.eng.fao | 9.2 | 0.322 |
| Tatoeba-test.eng-isl.eng.isl | 23.8 | 0.506 |
| Tatoeba-test.eng.multi | 52.8 | 0.688 |
| Tatoeba-test.eng-non.eng.non | 0.7 | 0.196 |
| Tatoeba-test.eng-nor.eng.nor | 50.3 | 0.678 |
| Tatoeba-test.eng-swe.eng.swe | 57.8 | 0.717 |
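Because this is a multilingual model, every input sentence needs the sentence-initial `>>id<<` token described above. A minimal sketch of preparing such a batch — the translation step in the trailing comments assumes the standard `transformers` MarianMT API and the `Helsinki-NLP/opus-mt-en-gmq` checkpoint name, neither of which is spelled out in this card:

```python
# Prepend the >>id<< target-language token that multilingual Marian
# models such as opus-mt-en-gmq require. The ids (dan, swe, nob, ...)
# come from this card's target-language list.

def add_lang_token(sentences, lang):
    """Prefix each sentence with the >>id<< token Marian expects."""
    return [f">>{lang}<< {s}" for s in sentences]

batch = add_lang_token(["How are you?", "Good morning."], "dan")
print(batch[0])  # >>dan<< How are you?

# With transformers installed, translation would then look roughly like:
#   from transformers import MarianMTModel, MarianTokenizer
#   tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-gmq")
#   model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-gmq")
#   out = model.generate(**tok(batch, return_tensors="pt", padding=True))
#   print([tok.decode(t, skip_special_tokens=True) for t in out])
```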
### System Info:
- hf_name: eng-gmq
- source_languages: eng
- target_languages: gmq
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmq/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq']
- src_constituents: {'eng'}
- tgt_constituents: {'dan', 'nob', 'nob_Hebr', 'swe', 'isl', 'nno', 'non_Latn', 'fao'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmq/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: gmq
- short_pair: en-gmq
- chrF2_score: 0.688
- bleu: 52.8
- brevity_penalty: 0.973
- ref_len: 71881.0
- src_name: English
- tgt_name: North Germanic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: gmq
- prefer_old: False
- long_pair: eng-gmq
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "da", "nb", "sv", "is", "nn", "fo", "gmq"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-gmq | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"da",
"nb",
"sv",
"is",
"nn",
"fo",
"gmq",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"da",
"nb",
"sv",
"is",
"nn",
"fo",
"gmq"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #da #nb #sv #is #nn #fo #gmq #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-gmq
* source group: English
* target group: North Germanic languages
* OPUS readme: eng-gmq
* model: transformer
* source language(s): eng
* target language(s): dan fao isl nno nob nob\_Hebr non\_Latn swe
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 57.7, chr-F: 0.724
testset: URL, BLEU: 9.2, chr-F: 0.322
testset: URL, BLEU: 23.8, chr-F: 0.506
testset: URL, BLEU: 52.8, chr-F: 0.688
testset: URL, BLEU: 0.7, chr-F: 0.196
testset: URL, BLEU: 50.3, chr-F: 0.678
testset: URL, BLEU: 57.8, chr-F: 0.717
### System Info:
* hf\_name: eng-gmq
* source\_languages: eng
* target\_languages: gmq
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq']
* src\_constituents: {'eng'}
* tgt\_constituents: {'dan', 'nob', 'nob\_Hebr', 'swe', 'isl', 'nno', 'non\_Latn', 'fao'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: gmq
* short\_pair: en-gmq
* chrF2\_score: 0.688
* bleu: 52.8
* brevity\_penalty: 0.973
* ref\_len: 71881.0
* src\_name: English
* tgt\_name: North Germanic languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: gmq
* prefer\_old: False
* long\_pair: eng-gmq
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-gmq\n\n\n* source group: English\n* target group: North Germanic languages\n* OPUS readme: eng-gmq\n* model: transformer\n* source language(s): eng\n* target language(s): dan fao isl nno nob nob\\_Hebr non\\_Latn swe\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 57.7, chr-F: 0.724\ntestset: URL, BLEU: 9.2, chr-F: 0.322\ntestset: URL, BLEU: 23.8, chr-F: 0.506\ntestset: URL, BLEU: 52.8, chr-F: 0.688\ntestset: URL, BLEU: 0.7, chr-F: 0.196\ntestset: URL, BLEU: 50.3, chr-F: 0.678\ntestset: URL, BLEU: 57.8, chr-F: 0.717",
"### System Info:\n\n\n* hf\\_name: eng-gmq\n* source\\_languages: eng\n* target\\_languages: gmq\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'dan', 'nob', 'nob\\_Hebr', 'swe', 'isl', 'nno', 'non\\_Latn', 'fao'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: gmq\n* short\\_pair: en-gmq\n* chrF2\\_score: 0.688\n* bleu: 52.8\n* brevity\\_penalty: 0.973\n* ref\\_len: 71881.0\n* src\\_name: English\n* tgt\\_name: North Germanic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: gmq\n* prefer\\_old: False\n* long\\_pair: eng-gmq\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #da #nb #sv #is #nn #fo #gmq #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-gmq\n\n\n* source group: English\n* target group: North Germanic languages\n* OPUS readme: eng-gmq\n* model: transformer\n* source language(s): eng\n* target language(s): dan fao isl nno nob nob\\_Hebr non\\_Latn swe\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 57.7, chr-F: 0.724\ntestset: URL, BLEU: 9.2, chr-F: 0.322\ntestset: URL, BLEU: 23.8, chr-F: 0.506\ntestset: URL, BLEU: 52.8, chr-F: 0.688\ntestset: URL, BLEU: 0.7, chr-F: 0.196\ntestset: URL, BLEU: 50.3, chr-F: 0.678\ntestset: URL, BLEU: 57.8, chr-F: 0.717",
"### System Info:\n\n\n* hf\\_name: eng-gmq\n* source\\_languages: eng\n* target\\_languages: gmq\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'dan', 'nob', 'nob\\_Hebr', 'swe', 'isl', 'nno', 'non\\_Latn', 'fao'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: gmq\n* short\\_pair: en-gmq\n* chrF2\\_score: 0.688\n* bleu: 52.8\n* brevity\\_penalty: 0.973\n* ref\\_len: 71881.0\n* src\\_name: English\n* tgt\\_name: North Germanic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: gmq\n* prefer\\_old: False\n* long\\_pair: eng-gmq\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
67,
316,
472
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #da #nb #sv #is #nn #fo #gmq #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-gmq\n\n\n* source group: English\n* target group: North Germanic languages\n* OPUS readme: eng-gmq\n* model: transformer\n* source language(s): eng\n* target language(s): dan fao isl nno nob nob\\_Hebr non\\_Latn swe\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 57.7, chr-F: 0.724\ntestset: URL, BLEU: 9.2, chr-F: 0.322\ntestset: URL, BLEU: 23.8, chr-F: 0.506\ntestset: URL, BLEU: 52.8, chr-F: 0.688\ntestset: URL, BLEU: 0.7, chr-F: 0.196\ntestset: URL, BLEU: 50.3, chr-F: 0.678\ntestset: URL, BLEU: 57.8, chr-F: 0.717### System Info:\n\n\n* hf\\_name: eng-gmq\n* source\\_languages: eng\n* target\\_languages: gmq\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'dan', 'nob', 'nob\\_Hebr', 'swe', 'isl', 'nno', 'non\\_Latn', 'fao'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: gmq\n* short\\_pair: en-gmq\n* chrF2\\_score: 0.688\n* bleu: 52.8\n* brevity\\_penalty: 0.973\n* ref\\_len: 71881.0\n* src\\_name: English\n* tgt\\_name: North Germanic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: gmq\n* prefer\\_old: False\n* long\\_pair: eng-gmq\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 
2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### eng-gmw
* source group: English
* target group: West Germanic languages
* OPUS readme: [eng-gmw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmw/README.md)
* model: transformer
* source language(s): eng
* target language(s): afr ang_Latn deu enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-engdeu.eng.deu | 21.4 | 0.518 |
| news-test2008-engdeu.eng.deu | 21.0 | 0.510 |
| newstest2009-engdeu.eng.deu | 20.4 | 0.513 |
| newstest2010-engdeu.eng.deu | 22.9 | 0.528 |
| newstest2011-engdeu.eng.deu | 20.5 | 0.508 |
| newstest2012-engdeu.eng.deu | 21.0 | 0.507 |
| newstest2013-engdeu.eng.deu | 24.7 | 0.533 |
| newstest2015-ende-engdeu.eng.deu | 28.2 | 0.568 |
| newstest2016-ende-engdeu.eng.deu | 33.3 | 0.605 |
| newstest2017-ende-engdeu.eng.deu | 26.5 | 0.559 |
| newstest2018-ende-engdeu.eng.deu | 39.9 | 0.649 |
| newstest2019-ende-engdeu.eng.deu | 35.9 | 0.616 |
| Tatoeba-test.eng-afr.eng.afr | 55.7 | 0.740 |
| Tatoeba-test.eng-ang.eng.ang | 6.5 | 0.164 |
| Tatoeba-test.eng-deu.eng.deu | 40.4 | 0.614 |
| Tatoeba-test.eng-enm.eng.enm | 2.3 | 0.254 |
| Tatoeba-test.eng-frr.eng.frr | 8.4 | 0.248 |
| Tatoeba-test.eng-fry.eng.fry | 17.9 | 0.424 |
| Tatoeba-test.eng-gos.eng.gos | 2.2 | 0.309 |
| Tatoeba-test.eng-gsw.eng.gsw | 1.6 | 0.186 |
| Tatoeba-test.eng-ksh.eng.ksh | 1.5 | 0.189 |
| Tatoeba-test.eng-ltz.eng.ltz | 20.2 | 0.383 |
| Tatoeba-test.eng.multi | 41.6 | 0.609 |
| Tatoeba-test.eng-nds.eng.nds | 18.9 | 0.437 |
| Tatoeba-test.eng-nld.eng.nld | 53.1 | 0.699 |
| Tatoeba-test.eng-pdc.eng.pdc | 7.7 | 0.262 |
| Tatoeba-test.eng-sco.eng.sco | 37.7 | 0.557 |
| Tatoeba-test.eng-stq.eng.stq | 5.9 | 0.380 |
| Tatoeba-test.eng-swg.eng.swg | 6.2 | 0.236 |
| Tatoeba-test.eng-yid.eng.yid | 6.8 | 0.296 |
### System Info:
- hf_name: eng-gmw
- source_languages: eng
- target_languages: gmw
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-gmw/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'nl', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']
- src_constituents: {'eng'}
- tgt_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-gmw/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: gmw
- short_pair: en-gmw
- chrF2_score: 0.609
- bleu: 41.6
- brevity_penalty: 0.9890000000000001
- ref_len: 74922.0
- src_name: English
- tgt_name: West Germanic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: gmw
- prefer_old: False
- long_pair: eng-gmw
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "nl", "lb", "af", "de", "fy", "yi", "gmw"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-gmw | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"nl",
"lb",
"af",
"de",
"fy",
"yi",
"gmw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"nl",
"lb",
"af",
"de",
"fy",
"yi",
"gmw"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #nl #lb #af #de #fy #yi #gmw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-gmw
* source group: English
* target group: West Germanic languages
* OPUS readme: eng-gmw
* model: transformer
* source language(s): eng
* target language(s): afr ang\_Latn deu enm\_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.4, chr-F: 0.518
testset: URL, BLEU: 21.0, chr-F: 0.510
testset: URL, BLEU: 20.4, chr-F: 0.513
testset: URL, BLEU: 22.9, chr-F: 0.528
testset: URL, BLEU: 20.5, chr-F: 0.508
testset: URL, BLEU: 21.0, chr-F: 0.507
testset: URL, BLEU: 24.7, chr-F: 0.533
testset: URL, BLEU: 28.2, chr-F: 0.568
testset: URL, BLEU: 33.3, chr-F: 0.605
testset: URL, BLEU: 26.5, chr-F: 0.559
testset: URL, BLEU: 39.9, chr-F: 0.649
testset: URL, BLEU: 35.9, chr-F: 0.616
testset: URL, BLEU: 55.7, chr-F: 0.740
testset: URL, BLEU: 6.5, chr-F: 0.164
testset: URL, BLEU: 40.4, chr-F: 0.614
testset: URL, BLEU: 2.3, chr-F: 0.254
testset: URL, BLEU: 8.4, chr-F: 0.248
testset: URL, BLEU: 17.9, chr-F: 0.424
testset: URL, BLEU: 2.2, chr-F: 0.309
testset: URL, BLEU: 1.6, chr-F: 0.186
testset: URL, BLEU: 1.5, chr-F: 0.189
testset: URL, BLEU: 20.2, chr-F: 0.383
testset: URL, BLEU: 41.6, chr-F: 0.609
testset: URL, BLEU: 18.9, chr-F: 0.437
testset: URL, BLEU: 53.1, chr-F: 0.699
testset: URL, BLEU: 7.7, chr-F: 0.262
testset: URL, BLEU: 37.7, chr-F: 0.557
testset: URL, BLEU: 5.9, chr-F: 0.380
testset: URL, BLEU: 6.2, chr-F: 0.236
testset: URL, BLEU: 6.8, chr-F: 0.296
### System Info:
* hf\_name: eng-gmw
* source\_languages: eng
* target\_languages: gmw
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'nl', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']
* src\_constituents: {'eng'}
* tgt\_constituents: {'ksh', 'nld', 'eng', 'enm\_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang\_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: gmw
* short\_pair: en-gmw
* chrF2\_score: 0.609
* bleu: 41.6
* brevity\_penalty: 0.9890000000000001
* ref\_len: 74922.0
* src\_name: English
* tgt\_name: West Germanic languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: gmw
* prefer\_old: False
* long\_pair: eng-gmw
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-gmw\n\n\n* source group: English\n* target group: West Germanic languages\n* OPUS readme: eng-gmw\n* model: transformer\n* source language(s): eng\n* target language(s): afr ang\\_Latn deu enm\\_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.4, chr-F: 0.518\ntestset: URL, BLEU: 21.0, chr-F: 0.510\ntestset: URL, BLEU: 20.4, chr-F: 0.513\ntestset: URL, BLEU: 22.9, chr-F: 0.528\ntestset: URL, BLEU: 20.5, chr-F: 0.508\ntestset: URL, BLEU: 21.0, chr-F: 0.507\ntestset: URL, BLEU: 24.7, chr-F: 0.533\ntestset: URL, BLEU: 28.2, chr-F: 0.568\ntestset: URL, BLEU: 33.3, chr-F: 0.605\ntestset: URL, BLEU: 26.5, chr-F: 0.559\ntestset: URL, BLEU: 39.9, chr-F: 0.649\ntestset: URL, BLEU: 35.9, chr-F: 0.616\ntestset: URL, BLEU: 55.7, chr-F: 0.740\ntestset: URL, BLEU: 6.5, chr-F: 0.164\ntestset: URL, BLEU: 40.4, chr-F: 0.614\ntestset: URL, BLEU: 2.3, chr-F: 0.254\ntestset: URL, BLEU: 8.4, chr-F: 0.248\ntestset: URL, BLEU: 17.9, chr-F: 0.424\ntestset: URL, BLEU: 2.2, chr-F: 0.309\ntestset: URL, BLEU: 1.6, chr-F: 0.186\ntestset: URL, BLEU: 1.5, chr-F: 0.189\ntestset: URL, BLEU: 20.2, chr-F: 0.383\ntestset: URL, BLEU: 41.6, chr-F: 0.609\ntestset: URL, BLEU: 18.9, chr-F: 0.437\ntestset: URL, BLEU: 53.1, chr-F: 0.699\ntestset: URL, BLEU: 7.7, chr-F: 0.262\ntestset: URL, BLEU: 37.7, chr-F: 0.557\ntestset: URL, BLEU: 5.9, chr-F: 0.380\ntestset: URL, BLEU: 6.2, chr-F: 0.236\ntestset: URL, BLEU: 6.8, chr-F: 0.296",
"### System Info:\n\n\n* hf\\_name: eng-gmw\n* source\\_languages: eng\n* target\\_languages: gmw\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'nl', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'ksh', 'nld', 'eng', 'enm\\_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang\\_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: gmw\n* short\\_pair: en-gmw\n* chrF2\\_score: 0.609\n* bleu: 41.6\n* brevity\\_penalty: 0.9890000000000001\n* ref\\_len: 74922.0\n* src\\_name: English\n* tgt\\_name: West Germanic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: gmw\n* prefer\\_old: False\n* long\\_pair: eng-gmw\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #nl #lb #af #de #fy #yi #gmw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-gmw\n\n\n* source group: English\n* target group: West Germanic languages\n* OPUS readme: eng-gmw\n* model: transformer\n* source language(s): eng\n* target language(s): afr ang\\_Latn deu enm\\_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.4, chr-F: 0.518\ntestset: URL, BLEU: 21.0, chr-F: 0.510\ntestset: URL, BLEU: 20.4, chr-F: 0.513\ntestset: URL, BLEU: 22.9, chr-F: 0.528\ntestset: URL, BLEU: 20.5, chr-F: 0.508\ntestset: URL, BLEU: 21.0, chr-F: 0.507\ntestset: URL, BLEU: 24.7, chr-F: 0.533\ntestset: URL, BLEU: 28.2, chr-F: 0.568\ntestset: URL, BLEU: 33.3, chr-F: 0.605\ntestset: URL, BLEU: 26.5, chr-F: 0.559\ntestset: URL, BLEU: 39.9, chr-F: 0.649\ntestset: URL, BLEU: 35.9, chr-F: 0.616\ntestset: URL, BLEU: 55.7, chr-F: 0.740\ntestset: URL, BLEU: 6.5, chr-F: 0.164\ntestset: URL, BLEU: 40.4, chr-F: 0.614\ntestset: URL, BLEU: 2.3, chr-F: 0.254\ntestset: URL, BLEU: 8.4, chr-F: 0.248\ntestset: URL, BLEU: 17.9, chr-F: 0.424\ntestset: URL, BLEU: 2.2, chr-F: 0.309\ntestset: URL, BLEU: 1.6, chr-F: 0.186\ntestset: URL, BLEU: 1.5, chr-F: 0.189\ntestset: URL, BLEU: 20.2, chr-F: 0.383\ntestset: URL, BLEU: 41.6, chr-F: 0.609\ntestset: URL, BLEU: 18.9, chr-F: 0.437\ntestset: URL, BLEU: 53.1, chr-F: 0.699\ntestset: URL, BLEU: 7.7, chr-F: 0.262\ntestset: URL, BLEU: 37.7, chr-F: 0.557\ntestset: URL, BLEU: 5.9, chr-F: 0.380\ntestset: URL, BLEU: 6.2, chr-F: 0.236\ntestset: URL, BLEU: 6.8, chr-F: 0.296",
"### System Info:\n\n\n* hf\\_name: eng-gmw\n* source\\_languages: eng\n* target\\_languages: gmw\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'nl', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'ksh', 'nld', 'eng', 'enm\\_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang\\_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: gmw\n* short\\_pair: en-gmw\n* chrF2\\_score: 0.609\n* bleu: 41.6\n* brevity\\_penalty: 0.9890000000000001\n* ref\\_len: 74922.0\n* src\\_name: English\n* tgt\\_name: West Germanic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: gmw\n* prefer\\_old: False\n* long\\_pair: eng-gmw\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
65,
854,
525
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #nl #lb #af #de #fy #yi #gmw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-gmw\n\n\n* source group: English\n* target group: West Germanic languages\n* OPUS readme: eng-gmw\n* model: transformer\n* source language(s): eng\n* target language(s): afr ang\\_Latn deu enm\\_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.4, chr-F: 0.518\ntestset: URL, BLEU: 21.0, chr-F: 0.510\ntestset: URL, BLEU: 20.4, chr-F: 0.513\ntestset: URL, BLEU: 22.9, chr-F: 0.528\ntestset: URL, BLEU: 20.5, chr-F: 0.508\ntestset: URL, BLEU: 21.0, chr-F: 0.507\ntestset: URL, BLEU: 24.7, chr-F: 0.533\ntestset: URL, BLEU: 28.2, chr-F: 0.568\ntestset: URL, BLEU: 33.3, chr-F: 0.605\ntestset: URL, BLEU: 26.5, chr-F: 0.559\ntestset: URL, BLEU: 39.9, chr-F: 0.649\ntestset: URL, BLEU: 35.9, chr-F: 0.616\ntestset: URL, BLEU: 55.7, chr-F: 0.740\ntestset: URL, BLEU: 6.5, chr-F: 0.164\ntestset: URL, BLEU: 40.4, chr-F: 0.614\ntestset: URL, BLEU: 2.3, chr-F: 0.254\ntestset: URL, BLEU: 8.4, chr-F: 0.248\ntestset: URL, BLEU: 17.9, chr-F: 0.424\ntestset: URL, BLEU: 2.2, chr-F: 0.309\ntestset: URL, BLEU: 1.6, chr-F: 0.186\ntestset: URL, BLEU: 1.5, chr-F: 0.189\ntestset: URL, BLEU: 20.2, chr-F: 0.383\ntestset: URL, BLEU: 41.6, chr-F: 0.609\ntestset: URL, BLEU: 18.9, chr-F: 0.437\ntestset: URL, BLEU: 53.1, chr-F: 0.699\ntestset: URL, BLEU: 7.7, chr-F: 0.262\ntestset: URL, BLEU: 37.7, chr-F: 0.557\ntestset: URL, BLEU: 5.9, chr-F: 0.380\ntestset: URL, BLEU: 6.2, chr-F: 0.236\ntestset: URL, BLEU: 6.8, chr-F: 0.296### System Info:\n\n\n* hf\\_name: 
eng-gmw\n* source\\_languages: eng\n* target\\_languages: gmw\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'nl', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'ksh', 'nld', 'eng', 'enm\\_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang\\_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: gmw\n* short\\_pair: en-gmw\n* chrF2\\_score: 0.609\n* bleu: 41.6\n* brevity\\_penalty: 0.9890000000000001\n* ref\\_len: 74922.0\n* src\\_name: English\n* tgt\\_name: West Germanic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: gmw\n* prefer\\_old: False\n* long\\_pair: eng-gmw\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### eng-grk
* source group: English
* target group: Greek languages
* OPUS readme: [eng-grk](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-grk/README.md)
* model: transformer
* source language(s): eng
* target language(s): ell grc_Grek
* model: transformer
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-ell.eng.ell | 53.8 | 0.723 |
| Tatoeba-test.eng-grc.eng.grc | 0.1 | 0.102 |
| Tatoeba-test.eng.multi | 45.6 | 0.677 |
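Because the target side of this model is multilingual, each source sentence must carry the `>>id<<` target-language token described above. The sketch below assumes the `transformers` package is installed; the model id `Helsinki-NLP/opus-mt-en-grk` and the target ids (`ell`, `grc_Grek`) are taken from this card's metadata, and the first call to `translate` downloads the checkpoint:

```python
def add_target_token(text: str, lang_id: str) -> str:
    """Prefix a source sentence with the >>id<< token that
    multilingual-target Marian models use to select the output language."""
    return f">>{lang_id}<< {text}"

def translate(texts, lang_id="ell", model_name="Helsinki-NLP/opus-mt-en-grk"):
    # Imported lazily so the pure-string helper above stays dependency-free.
    from transformers import MarianMTModel, MarianTokenizer

    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer(
        [add_target_token(t, lang_id) for t in texts],
        return_tensors="pt",
        padding=True,
    )
    generated = model.generate(**batch)
    return [tokenizer.decode(g, skip_special_tokens=True) for g in generated]

# Example (downloads the checkpoint on first call):
#   translate(["How are you today?"], lang_id="ell")       # Modern Greek
#   translate(["How are you today?"], lang_id="grc_Grek")  # Ancient Greek
```

Note the very low Ancient Greek score in the table above (BLEU 0.1): selecting `grc_Grek` is possible but unlikely to yield usable output.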
### System Info:
- hf_name: eng-grk
- source_languages: eng
- target_languages: grk
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-grk/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'el', 'grk']
- src_constituents: {'eng'}
- tgt_constituents: {'grc_Grek', 'ell'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-grk/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: grk
- short_pair: en-grk
- chrF2_score: 0.677
- bleu: 45.6
- brevity_penalty: 1.0
- ref_len: 59951.0
- src_name: English
- tgt_name: Greek languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: grk
- prefer_old: False
- long_pair: eng-grk
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "el", "grk"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-grk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"el",
"grk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"el",
"grk"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #el #grk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-grk
* source group: English
* target group: Greek languages
* OPUS readme: eng-grk
* model: transformer
* source language(s): eng
* target language(s): ell grc\_Grek
* model: transformer
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 53.8, chr-F: 0.723
testset: URL, BLEU: 0.1, chr-F: 0.102
testset: URL, BLEU: 45.6, chr-F: 0.677
### System Info:
* hf\_name: eng-grk
* source\_languages: eng
* target\_languages: grk
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'el', 'grk']
* src\_constituents: {'eng'}
* tgt\_constituents: {'grc\_Grek', 'ell'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm12k,spm12k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: grk
* short\_pair: en-grk
* chrF2\_score: 0.677
* bleu: 45.6
* brevity\_penalty: 1.0
* ref\_len: 59951.0
* src\_name: English
* tgt\_name: Greek languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: grk
* prefer\_old: False
* long\_pair: eng-grk
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-grk\n\n\n* source group: English\n* target group: Greek languages\n* OPUS readme: eng-grk\n* model: transformer\n* source language(s): eng\n* target language(s): ell grc\\_Grek\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 53.8, chr-F: 0.723\ntestset: URL, BLEU: 0.1, chr-F: 0.102\ntestset: URL, BLEU: 45.6, chr-F: 0.677",
"### System Info:\n\n\n* hf\\_name: eng-grk\n* source\\_languages: eng\n* target\\_languages: grk\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'el', 'grk']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'grc\\_Grek', 'ell'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: grk\n* short\\_pair: en-grk\n* chrF2\\_score: 0.677\n* bleu: 45.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 59951.0\n* src\\_name: English\n* tgt\\_name: Greek languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: grk\n* prefer\\_old: False\n* long\\_pair: eng-grk\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #el #grk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-grk\n\n\n* source group: English\n* target group: Greek languages\n* OPUS readme: eng-grk\n* model: transformer\n* source language(s): eng\n* target language(s): ell grc\\_Grek\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 53.8, chr-F: 0.723\ntestset: URL, BLEU: 0.1, chr-F: 0.102\ntestset: URL, BLEU: 45.6, chr-F: 0.677",
"### System Info:\n\n\n* hf\\_name: eng-grk\n* source\\_languages: eng\n* target\\_languages: grk\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'el', 'grk']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'grc\\_Grek', 'ell'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: grk\n* short\\_pair: en-grk\n* chrF2\\_score: 0.677\n* bleu: 45.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 59951.0\n* src\\_name: English\n* tgt\\_name: Greek languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: grk\n* prefer\\_old: False\n* long\\_pair: eng-grk\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
54,
209,
413
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #el #grk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-grk\n\n\n* source group: English\n* target group: Greek languages\n* OPUS readme: eng-grk\n* model: transformer\n* source language(s): eng\n* target language(s): ell grc\\_Grek\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 53.8, chr-F: 0.723\ntestset: URL, BLEU: 0.1, chr-F: 0.102\ntestset: URL, BLEU: 45.6, chr-F: 0.677### System Info:\n\n\n* hf\\_name: eng-grk\n* source\\_languages: eng\n* target\\_languages: grk\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'el', 'grk']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'grc\\_Grek', 'ell'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: grk\n* short\\_pair: en-grk\n* chrF2\\_score: 0.677\n* bleu: 45.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 59951.0\n* src\\_name: English\n* tgt\\_name: Greek languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: grk\n* prefer\\_old: False\n* long\\_pair: eng-grk\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-guw
* source languages: en
* target languages: guw
* OPUS readme: [en-guw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-guw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-guw/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.guw | 45.7 | 0.634 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-guw | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"guw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #guw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-guw
* source languages: en
* target languages: guw
* OPUS readme: en-guw
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 45.7, chr-F: 0.634
| [
"### opus-mt-en-guw\n\n\n* source languages: en\n* target languages: guw\n* OPUS readme: en-guw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.7, chr-F: 0.634"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #guw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-guw\n\n\n* source languages: en\n* target languages: guw\n* OPUS readme: en-guw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.7, chr-F: 0.634"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #guw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-guw\n\n\n* source languages: en\n* target languages: guw\n* OPUS readme: en-guw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.7, chr-F: 0.634"
] |
translation | transformers |
### opus-mt-en-gv
* source languages: en
* target languages: gv
* OPUS readme: [en-gv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-gv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-gv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-gv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| bible-uedin.en.gv | 70.1 | 0.885 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-gv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"gv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #gv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-gv
* source languages: en
* target languages: gv
* OPUS readme: en-gv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 70.1, chr-F: 0.885
| [
"### opus-mt-en-gv\n\n\n* source languages: en\n* target languages: gv\n* OPUS readme: en-gv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 70.1, chr-F: 0.885"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #gv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-gv\n\n\n* source languages: en\n* target languages: gv\n* OPUS readme: en-gv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 70.1, chr-F: 0.885"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #gv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-gv\n\n\n* source languages: en\n* target languages: gv\n* OPUS readme: en-gv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 70.1, chr-F: 0.885"
] |
translation | transformers |
### opus-mt-en-ha
* source languages: en
* target languages: ha
* OPUS readme: [en-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ha/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ha/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ha | 34.1 | 0.544 |
| Tatoeba.en.ha | 17.6 | 0.498 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ha | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ha",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ha #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ha
* source languages: en
* target languages: ha
* OPUS readme: en-ha
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 34.1, chr-F: 0.544
testset: URL, BLEU: 17.6, chr-F: 0.498
| [
"### opus-mt-en-ha\n\n\n* source languages: en\n* target languages: ha\n* OPUS readme: en-ha\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.1, chr-F: 0.544\ntestset: URL, BLEU: 17.6, chr-F: 0.498"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ha #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ha\n\n\n* source languages: en\n* target languages: ha\n* OPUS readme: en-ha\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.1, chr-F: 0.544\ntestset: URL, BLEU: 17.6, chr-F: 0.498"
] | [
51,
129
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ha #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ha\n\n\n* source languages: en\n* target languages: ha\n* OPUS readme: en-ha\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.1, chr-F: 0.544\ntestset: URL, BLEU: 17.6, chr-F: 0.498"
] |
translation | transformers |
### opus-mt-en-he
* source languages: en
* target languages: he
* OPUS readme: [en-he](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-he/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-he/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.he | 40.1 | 0.609 |
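Since this model has a single target language, no `>>id<<` token is needed and the `transformers` pipeline API can be used directly. A minimal sketch, assuming the `transformers` package is installed (the `chunk_sentences` helper is an illustrative addition, not part of the model, for keeping inputs within the model's context window):

```python
import re

def chunk_sentences(text, max_chars=512):
    """Greedily group sentences so each chunk stays under max_chars."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + 1 + len(s) > max_chars:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip() if current else s
    if current:
        chunks.append(current)
    return chunks

def translate_long(text, model_name="Helsinki-NLP/opus-mt-en-he"):
    # Imported lazily; the first call downloads the checkpoint.
    from transformers import pipeline

    translator = pipeline("translation", model=model_name)
    return " ".join(
        out["translation_text"] for out in translator(chunk_sentences(text))
    )

# Example:
#   translate_long("My name is Sarah. I live in London.")
```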
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-he | null | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-he
* source languages: en
* target languages: he
* OPUS readme: en-he
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 40.1, chr-F: 0.609
| [
"### opus-mt-en-he\n\n\n* source languages: en\n* target languages: he\n* OPUS readme: en-he\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.609"
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-he\n\n\n* source languages: en\n* target languages: he\n* OPUS readme: en-he\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.609"
] | [
53,
106
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-he\n\n\n* source languages: en\n* target languages: he\n* OPUS readme: en-he\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.609"
] |
translation | transformers |
### eng-hin
* source group: English
* target group: Hindi
* OPUS readme: [eng-hin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hin/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): hin
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014.eng.hin | 6.9 | 0.296 |
| newstest2014-hien.eng.hin | 9.9 | 0.323 |
| Tatoeba-test.eng.hin | 16.1 | 0.447 |
### System Info:
- hf_name: eng-hin
- source_languages: eng
- target_languages: hin
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hin/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'hi']
- src_constituents: {'eng'}
- tgt_constituents: {'hin'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hin/opus-2020-06-17.test.txt
- src_alpha3: eng
- tgt_alpha3: hin
- short_pair: en-hi
- chrF2_score: 0.447
- bleu: 16.1
- brevity_penalty: 1.0
- ref_len: 32904.0
- src_name: English
- tgt_name: Hindi
- train_date: 2020-06-17
- src_alpha2: en
- tgt_alpha2: hi
- prefer_old: False
- long_pair: eng-hin
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "hi"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-hi | null | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"hi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"hi"
] | TAGS
#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #hi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-hin
* source group: English
* target group: Hindi
* OPUS readme: eng-hin
* model: transformer-align
* source language(s): eng
* target language(s): hin
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 6.9, chr-F: 0.296
testset: URL, BLEU: 9.9, chr-F: 0.323
testset: URL, BLEU: 16.1, chr-F: 0.447
### System Info:
* hf\_name: eng-hin
* source\_languages: eng
* target\_languages: hin
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'hi']
* src\_constituents: {'eng'}
* tgt\_constituents: {'hin'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: hin
* short\_pair: en-hi
* chrF2\_score: 0.447
* bleu: 16.1
* brevity\_penalty: 1.0
* ref\_len: 32904.0
* src\_name: English
* tgt\_name: Hindi
* train\_date: 2020-06-17
* src\_alpha2: en
* tgt\_alpha2: hi
* prefer\_old: False
* long\_pair: eng-hin
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-hin\n\n\n* source group: English\n* target group: Hindi\n* OPUS readme: eng-hin\n* model: transformer-align\n* source language(s): eng\n* target language(s): hin\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 6.9, chr-F: 0.296\ntestset: URL, BLEU: 9.9, chr-F: 0.323\ntestset: URL, BLEU: 16.1, chr-F: 0.447",
"### System Info:\n\n\n* hf\\_name: eng-hin\n* source\\_languages: eng\n* target\\_languages: hin\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'hi']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'hin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: hin\n* short\\_pair: en-hi\n* chrF2\\_score: 0.447\n* bleu: 16.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 32904.0\n* src\\_name: English\n* tgt\\_name: Hindi\n* train\\_date: 2020-06-17\n* src\\_alpha2: en\n* tgt\\_alpha2: hi\n* prefer\\_old: False\n* long\\_pair: eng-hin\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #hi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-hin\n\n\n* source group: English\n* target group: Hindi\n* OPUS readme: eng-hin\n* model: transformer-align\n* source language(s): eng\n* target language(s): hin\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 6.9, chr-F: 0.296\ntestset: URL, BLEU: 9.9, chr-F: 0.323\ntestset: URL, BLEU: 16.1, chr-F: 0.447",
"### System Info:\n\n\n* hf\\_name: eng-hin\n* source\\_languages: eng\n* target\\_languages: hin\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'hi']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'hin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: hin\n* short\\_pair: en-hi\n* chrF2\\_score: 0.447\n* bleu: 16.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 32904.0\n* src\\_name: English\n* tgt\\_name: Hindi\n* train\\_date: 2020-06-17\n* src\\_alpha2: en\n* tgt\\_alpha2: hi\n* prefer\\_old: False\n* long\\_pair: eng-hin\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
53,
178,
396
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #hi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-hin\n\n\n* source group: English\n* target group: Hindi\n* OPUS readme: eng-hin\n* model: transformer-align\n* source language(s): eng\n* target language(s): hin\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 6.9, chr-F: 0.296\ntestset: URL, BLEU: 9.9, chr-F: 0.323\ntestset: URL, BLEU: 16.1, chr-F: 0.447### System Info:\n\n\n* hf\\_name: eng-hin\n* source\\_languages: eng\n* target\\_languages: hin\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'hi']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'hin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: hin\n* short\\_pair: en-hi\n* chrF2\\_score: 0.447\n* bleu: 16.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 32904.0\n* src\\_name: English\n* tgt\\_name: Hindi\n* train\\_date: 2020-06-17\n* src\\_alpha2: en\n* tgt\\_alpha2: hi\n* prefer\\_old: False\n* long\\_pair: eng-hin\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-hil
* source languages: en
* target languages: hil
* OPUS readme: [en-hil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-hil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-hil/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hil/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hil/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.hil | 49.4 | 0.696 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-hil | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"hil",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #hil #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-hil
* source languages: en
* target languages: hil
* OPUS readme: en-hil
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 49.4, chr-F: 0.696
| [
"### opus-mt-en-hil\n\n\n* source languages: en\n* target languages: hil\n* OPUS readme: en-hil\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.4, chr-F: 0.696"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #hil #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-hil\n\n\n* source languages: en\n* target languages: hil\n* OPUS readme: en-hil\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.4, chr-F: 0.696"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #hil #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-hil\n\n\n* source languages: en\n* target languages: hil\n* OPUS readme: en-hil\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.4, chr-F: 0.696"
] |
translation | transformers |
### opus-mt-en-ho
* source languages: en
* target languages: ho
* OPUS readme: [en-ho](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ho/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ho/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ho/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ho/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ho | 33.9 | 0.563 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ho | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ho",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ho #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ho
* source languages: en
* target languages: ho
* OPUS readme: en-ho
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.9, chr-F: 0.563
| [
"### opus-mt-en-ho\n\n\n* source languages: en\n* target languages: ho\n* OPUS readme: en-ho\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.9, chr-F: 0.563"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ho #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ho\n\n\n* source languages: en\n* target languages: ho\n* OPUS readme: en-ho\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.9, chr-F: 0.563"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ho #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ho\n\n\n* source languages: en\n* target languages: ho\n* OPUS readme: en-ho\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.9, chr-F: 0.563"
] |
translation | transformers |
### opus-mt-en-ht
* source languages: en
* target languages: ht
* OPUS readme: [en-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ht/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ht/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ht | 38.3 | 0.545 |
| Tatoeba.en.ht | 45.2 | 0.592 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ht | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ht",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ht #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ht
* source languages: en
* target languages: ht
* OPUS readme: en-ht
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.3, chr-F: 0.545
testset: URL, BLEU: 45.2, chr-F: 0.592
| [
"### opus-mt-en-ht\n\n\n* source languages: en\n* target languages: ht\n* OPUS readme: en-ht\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.3, chr-F: 0.545\ntestset: URL, BLEU: 45.2, chr-F: 0.592"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ht #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ht\n\n\n* source languages: en\n* target languages: ht\n* OPUS readme: en-ht\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.3, chr-F: 0.545\ntestset: URL, BLEU: 45.2, chr-F: 0.592"
] | [
52,
132
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ht #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ht\n\n\n* source languages: en\n* target languages: ht\n* OPUS readme: en-ht\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.3, chr-F: 0.545\ntestset: URL, BLEU: 45.2, chr-F: 0.592"
] |
translation | transformers |
### opus-mt-en-hu
* source languages: en
* target languages: hu
* OPUS readme: [en-hu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-hu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-hu/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.hu | 40.1 | 0.628 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-hu | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"hu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-hu
* source languages: en
* target languages: hu
* OPUS readme: en-hu
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 40.1, chr-F: 0.628
| [
"### opus-mt-en-hu\n\n\n* source languages: en\n* target languages: hu\n* OPUS readme: en-hu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.628"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-hu\n\n\n* source languages: en\n* target languages: hu\n* OPUS readme: en-hu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.628"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-hu\n\n\n* source languages: en\n* target languages: hu\n* OPUS readme: en-hu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.628"
] |
translation | transformers |
### eng-hye
* source group: English
* target group: Armenian
* OPUS readme: [eng-hye](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hye/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): hye
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.hye | 16.6 | 0.404 |
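The System Info block below reports `bleu`, `brevity_penalty`, and `ref_len`. BLEU multiplies the geometric mean of n-gram precisions by a brevity penalty that punishes hypotheses shorter than the reference; a minimal sketch of that penalty (the reported scores come from the standard evaluation tooling, so this is illustrative only):

```python
import math

def brevity_penalty(hyp_len, ref_len):
    """BLEU brevity penalty: 1.0 when the hypothesis corpus is at least
    as long as the reference, exp(1 - ref_len/hyp_len) otherwise."""
    if hyp_len == 0:
        return 0.0
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# A brevity_penalty of 1.0, as reported below, means the system output
# was at least as long in tokens as the 5115-token reference set.
print(brevity_penalty(5200, 5115))  # -> 1.0
print(brevity_penalty(4000, 5115))  # shorter output is penalized (< 1.0)
```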
### System Info:
- hf_name: eng-hye
- source_languages: eng
- target_languages: hye
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hye/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'hy']
- src_constituents: {'eng'}
- tgt_constituents: {'hye', 'hye_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.test.txt
- src_alpha3: eng
- tgt_alpha3: hye
- short_pair: en-hy
- chrF2_score: 0.40399999999999997
- bleu: 16.6
- brevity_penalty: 1.0
- ref_len: 5115.0
- src_name: English
- tgt_name: Armenian
- train_date: 2020-06-16
- src_alpha2: en
- tgt_alpha2: hy
- prefer_old: False
- long_pair: eng-hye
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "hy"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-hy | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"hy",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"hy"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #hy #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-hye
* source group: English
* target group: Armenian
* OPUS readme: eng-hye
* model: transformer-align
* source language(s): eng
* target language(s): hye
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 16.6, chr-F: 0.404
### System Info:
* hf\_name: eng-hye
* source\_languages: eng
* target\_languages: hye
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'hy']
* src\_constituents: {'eng'}
* tgt\_constituents: {'hye', 'hye\_Latn'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: hye
* short\_pair: en-hy
* chrF2\_score: 0.40399999999999997
* bleu: 16.6
* brevity\_penalty: 1.0
* ref\_len: 5115.0
* src\_name: English
* tgt\_name: Armenian
* train\_date: 2020-06-16
* src\_alpha2: en
* tgt\_alpha2: hy
* prefer\_old: False
* long\_pair: eng-hye
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-hye\n\n\n* source group: English\n* target group: Armenian\n* OPUS readme: eng-hye\n* model: transformer-align\n* source language(s): eng\n* target language(s): hye\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.6, chr-F: 0.404",
"### System Info:\n\n\n* hf\\_name: eng-hye\n* source\\_languages: eng\n* target\\_languages: hye\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'hy']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'hye', 'hye\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: hye\n* short\\_pair: en-hy\n* chrF2\\_score: 0.40399999999999997\n* bleu: 16.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 5115.0\n* src\\_name: English\n* tgt\\_name: Armenian\n* train\\_date: 2020-06-16\n* src\\_alpha2: en\n* tgt\\_alpha2: hy\n* prefer\\_old: False\n* long\\_pair: eng-hye\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #hy #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-hye\n\n\n* source group: English\n* target group: Armenian\n* OPUS readme: eng-hye\n* model: transformer-align\n* source language(s): eng\n* target language(s): hye\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.6, chr-F: 0.404",
"### System Info:\n\n\n* hf\\_name: eng-hye\n* source\\_languages: eng\n* target\\_languages: hye\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'hy']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'hye', 'hye\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: hye\n* short\\_pair: en-hy\n* chrF2\\_score: 0.40399999999999997\n* bleu: 16.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 5115.0\n* src\\_name: English\n* tgt\\_name: Armenian\n* train\\_date: 2020-06-16\n* src\\_alpha2: en\n* tgt\\_alpha2: hy\n* prefer\\_old: False\n* long\\_pair: eng-hye\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
133,
421
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #hy #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-hye\n\n\n* source group: English\n* target group: Armenian\n* OPUS readme: eng-hye\n* model: transformer-align\n* source language(s): eng\n* target language(s): hye\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.6, chr-F: 0.404### System Info:\n\n\n* hf\\_name: eng-hye\n* source\\_languages: eng\n* target\\_languages: hye\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'hy']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'hye', 'hye\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: hye\n* short\\_pair: en-hy\n* chrF2\\_score: 0.40399999999999997\n* bleu: 16.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 5115.0\n* src\\_name: English\n* tgt\\_name: Armenian\n* train\\_date: 2020-06-16\n* src\\_alpha2: en\n* tgt\\_alpha2: hy\n* prefer\\_old: False\n* long\\_pair: eng-hye\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-id
* source languages: en
* target languages: id
* OPUS readme: [en-id](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-id/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-id/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-id/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-id/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.id | 38.3 | 0.636 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-id | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"id",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #id #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-id
* source languages: en
* target languages: id
* OPUS readme: en-id
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.3, chr-F: 0.636
| [
"### opus-mt-en-id\n\n\n* source languages: en\n* target languages: id\n* OPUS readme: en-id\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.3, chr-F: 0.636"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #id #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-id\n\n\n* source languages: en\n* target languages: id\n* OPUS readme: en-id\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.3, chr-F: 0.636"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #id #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-id\n\n\n* source languages: en\n* target languages: id\n* OPUS readme: en-id\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.3, chr-F: 0.636"
] |
translation | transformers |
### opus-mt-en-ig
* source languages: en
* target languages: ig
* OPUS readme: [en-ig](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ig/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ig/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ig | 39.5 | 0.546 |
| Tatoeba.en.ig | 3.8 | 0.297 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ig | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ig",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ig
* source languages: en
* target languages: ig
* OPUS readme: en-ig
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 39.5, chr-F: 0.546
testset: URL, BLEU: 3.8, chr-F: 0.297
| [
"### opus-mt-en-ig\n\n\n* source languages: en\n* target languages: ig\n* OPUS readme: en-ig\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.5, chr-F: 0.546\ntestset: URL, BLEU: 3.8, chr-F: 0.297"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ig\n\n\n* source languages: en\n* target languages: ig\n* OPUS readme: en-ig\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.5, chr-F: 0.546\ntestset: URL, BLEU: 3.8, chr-F: 0.297"
] | [
52,
131
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ig\n\n\n* source languages: en\n* target languages: ig\n* OPUS readme: en-ig\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.5, chr-F: 0.546\ntestset: URL, BLEU: 3.8, chr-F: 0.297"
] |
translation | transformers |
### eng-iir
* source group: English
* target group: Indo-Iranian languages
* OPUS readme: [eng-iir](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-iir/README.md)
* model: transformer
* source language(s): eng
* target language(s): asm awa ben bho gom guj hif_Latn hin jdt_Cyrl kur_Arab kur_Latn mai mar npi ori oss pan_Guru pes pes_Latn pes_Thaa pnb pus rom san_Deva sin snd_Arab tgk_Cyrl tly_Latn urd zza
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-enghin.eng.hin | 6.7 | 0.326 |
| newsdev2019-engu-engguj.eng.guj | 6.0 | 0.283 |
| newstest2014-hien-enghin.eng.hin | 10.4 | 0.353 |
| newstest2019-engu-engguj.eng.guj | 6.6 | 0.282 |
| Tatoeba-test.eng-asm.eng.asm | 2.7 | 0.249 |
| Tatoeba-test.eng-awa.eng.awa | 0.4 | 0.122 |
| Tatoeba-test.eng-ben.eng.ben | 15.3 | 0.459 |
| Tatoeba-test.eng-bho.eng.bho | 3.7 | 0.161 |
| Tatoeba-test.eng-fas.eng.fas | 3.4 | 0.227 |
| Tatoeba-test.eng-guj.eng.guj | 18.5 | 0.365 |
| Tatoeba-test.eng-hif.eng.hif | 1.0 | 0.064 |
| Tatoeba-test.eng-hin.eng.hin | 17.0 | 0.461 |
| Tatoeba-test.eng-jdt.eng.jdt | 3.9 | 0.122 |
| Tatoeba-test.eng-kok.eng.kok | 5.5 | 0.059 |
| Tatoeba-test.eng-kur.eng.kur | 4.0 | 0.125 |
| Tatoeba-test.eng-lah.eng.lah | 0.3 | 0.008 |
| Tatoeba-test.eng-mai.eng.mai | 9.3 | 0.445 |
| Tatoeba-test.eng-mar.eng.mar | 20.7 | 0.473 |
| Tatoeba-test.eng.multi | 13.7 | 0.392 |
| Tatoeba-test.eng-nep.eng.nep | 0.6 | 0.060 |
| Tatoeba-test.eng-ori.eng.ori | 2.4 | 0.193 |
| Tatoeba-test.eng-oss.eng.oss | 2.1 | 0.174 |
| Tatoeba-test.eng-pan.eng.pan | 9.7 | 0.355 |
| Tatoeba-test.eng-pus.eng.pus | 1.0 | 0.126 |
| Tatoeba-test.eng-rom.eng.rom | 1.3 | 0.230 |
| Tatoeba-test.eng-san.eng.san | 1.3 | 0.101 |
| Tatoeba-test.eng-sin.eng.sin | 11.7 | 0.384 |
| Tatoeba-test.eng-snd.eng.snd | 2.8 | 0.180 |
| Tatoeba-test.eng-tgk.eng.tgk | 8.1 | 0.353 |
| Tatoeba-test.eng-tly.eng.tly | 0.5 | 0.015 |
| Tatoeba-test.eng-urd.eng.urd | 12.3 | 0.409 |
| Tatoeba-test.eng-zza.eng.zza | 0.5 | 0.025 |
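As noted above, this multilingual model requires a sentence-initial target-language token of the form `>>id<<`. A small sketch of that prefixing step, using the target-language IDs listed in this card (the helper name and validation logic are illustrative — with the transformers library, the prefixed string is what would be passed to the tokenizer):

```python
# Target-language IDs supported by this model, copied from the card above.
TARGETS = {
    "asm", "awa", "ben", "bho", "gom", "guj", "hif_Latn", "hin", "jdt_Cyrl",
    "kur_Arab", "kur_Latn", "mai", "mar", "npi", "ori", "oss", "pan_Guru",
    "pes", "pes_Latn", "pes_Thaa", "pnb", "pus", "rom", "san_Deva", "sin",
    "snd_Arab", "tgk_Cyrl", "tly_Latn", "urd", "zza",
}

def add_target_token(text, target):
    """Prepend the >>id<< language token expected by multilingual
    OPUS-MT models, validating against the supported target list."""
    if target not in TARGETS:
        raise ValueError(f"unsupported target language ID: {target!r}")
    return f">>{target}<< {text}"

print(add_target_token("How are you?", "hin"))  # -> >>hin<< How are you?
```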
### System Info:
- hf_name: eng-iir
- source_languages: eng
- target_languages: iir
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-iir/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir']
- src_constituents: {'eng'}
- tgt_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur_Arab', 'tgk_Cyrl', 'hin', 'kur_Latn', 'pes_Thaa', 'pus', 'san_Deva', 'oss', 'tly_Latn', 'jdt_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes_Latn', 'awa', 'sin'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-iir/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: iir
- short_pair: en-iir
- chrF2_score: 0.392
- bleu: 13.7
- brevity_penalty: 1.0
- ref_len: 63351.0
- src_name: English
- tgt_name: Indo-Iranian languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: iir
- prefer_old: False
- long_pair: eng-iir
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "bn", "or", "gu", "mr", "ur", "hi", "ps", "os", "as", "si", "iir"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-iir | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"bn",
"or",
"gu",
"mr",
"ur",
"hi",
"ps",
"os",
"as",
"si",
"iir",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"bn",
"or",
"gu",
"mr",
"ur",
"hi",
"ps",
"os",
"as",
"si",
"iir"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #bn #or #gu #mr #ur #hi #ps #os #as #si #iir #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-iir
* source group: English
* target group: Indo-Iranian languages
* OPUS readme: eng-iir
* model: transformer
* source language(s): eng
* target language(s): asm awa ben bho gom guj hif\_Latn hin jdt\_Cyrl kur\_Arab kur\_Latn mai mar npi ori oss pan\_Guru pes pes\_Latn pes\_Thaa pnb pus rom san\_Deva sin snd\_Arab tgk\_Cyrl tly\_Latn urd zza
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 6.7, chr-F: 0.326
testset: URL, BLEU: 6.0, chr-F: 0.283
testset: URL, BLEU: 10.4, chr-F: 0.353
testset: URL, BLEU: 6.6, chr-F: 0.282
testset: URL, BLEU: 2.7, chr-F: 0.249
testset: URL, BLEU: 0.4, chr-F: 0.122
testset: URL, BLEU: 15.3, chr-F: 0.459
testset: URL, BLEU: 3.7, chr-F: 0.161
testset: URL, BLEU: 3.4, chr-F: 0.227
testset: URL, BLEU: 18.5, chr-F: 0.365
testset: URL, BLEU: 1.0, chr-F: 0.064
testset: URL, BLEU: 17.0, chr-F: 0.461
testset: URL, BLEU: 3.9, chr-F: 0.122
testset: URL, BLEU: 5.5, chr-F: 0.059
testset: URL, BLEU: 4.0, chr-F: 0.125
testset: URL, BLEU: 0.3, chr-F: 0.008
testset: URL, BLEU: 9.3, chr-F: 0.445
testset: URL, BLEU: 20.7, chr-F: 0.473
testset: URL, BLEU: 13.7, chr-F: 0.392
testset: URL, BLEU: 0.6, chr-F: 0.060
testset: URL, BLEU: 2.4, chr-F: 0.193
testset: URL, BLEU: 2.1, chr-F: 0.174
testset: URL, BLEU: 9.7, chr-F: 0.355
testset: URL, BLEU: 1.0, chr-F: 0.126
testset: URL, BLEU: 1.3, chr-F: 0.230
testset: URL, BLEU: 1.3, chr-F: 0.101
testset: URL, BLEU: 11.7, chr-F: 0.384
testset: URL, BLEU: 2.8, chr-F: 0.180
testset: URL, BLEU: 8.1, chr-F: 0.353
testset: URL, BLEU: 0.5, chr-F: 0.015
testset: URL, BLEU: 12.3, chr-F: 0.409
testset: URL, BLEU: 0.5, chr-F: 0.025
### System Info:
* hf\_name: eng-iir
* source\_languages: eng
* target\_languages: iir
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir']
* src\_constituents: {'eng'}
* tgt\_constituents: {'pnb', 'gom', 'ben', 'hif\_Latn', 'ori', 'guj', 'pan\_Guru', 'snd\_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur\_Arab', 'tgk\_Cyrl', 'hin', 'kur\_Latn', 'pes\_Thaa', 'pus', 'san\_Deva', 'oss', 'tly\_Latn', 'jdt\_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes\_Latn', 'awa', 'sin'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: iir
* short\_pair: en-iir
* chrF2\_score: 0.392
* bleu: 13.7
* brevity\_penalty: 1.0
* ref\_len: 63351.0
* src\_name: English
* tgt\_name: Indo-Iranian languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: iir
* prefer\_old: False
* long\_pair: eng-iir
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-iir\n\n\n* source group: English\n* target group: Indo-Iranian languages\n* OPUS readme: eng-iir\n* model: transformer\n* source language(s): eng\n* target language(s): asm awa ben bho gom guj hif\\_Latn hin jdt\\_Cyrl kur\\_Arab kur\\_Latn mai mar npi ori oss pan\\_Guru pes pes\\_Latn pes\\_Thaa pnb pus rom san\\_Deva sin snd\\_Arab tgk\\_Cyrl tly\\_Latn urd zza\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 6.7, chr-F: 0.326\ntestset: URL, BLEU: 6.0, chr-F: 0.283\ntestset: URL, BLEU: 10.4, chr-F: 0.353\ntestset: URL, BLEU: 6.6, chr-F: 0.282\ntestset: URL, BLEU: 2.7, chr-F: 0.249\ntestset: URL, BLEU: 0.4, chr-F: 0.122\ntestset: URL, BLEU: 15.3, chr-F: 0.459\ntestset: URL, BLEU: 3.7, chr-F: 0.161\ntestset: URL, BLEU: 3.4, chr-F: 0.227\ntestset: URL, BLEU: 18.5, chr-F: 0.365\ntestset: URL, BLEU: 1.0, chr-F: 0.064\ntestset: URL, BLEU: 17.0, chr-F: 0.461\ntestset: URL, BLEU: 3.9, chr-F: 0.122\ntestset: URL, BLEU: 5.5, chr-F: 0.059\ntestset: URL, BLEU: 4.0, chr-F: 0.125\ntestset: URL, BLEU: 0.3, chr-F: 0.008\ntestset: URL, BLEU: 9.3, chr-F: 0.445\ntestset: URL, BLEU: 20.7, chr-F: 0.473\ntestset: URL, BLEU: 13.7, chr-F: 0.392\ntestset: URL, BLEU: 0.6, chr-F: 0.060\ntestset: URL, BLEU: 2.4, chr-F: 0.193\ntestset: URL, BLEU: 2.1, chr-F: 0.174\ntestset: URL, BLEU: 9.7, chr-F: 0.355\ntestset: URL, BLEU: 1.0, chr-F: 0.126\ntestset: URL, BLEU: 1.3, chr-F: 0.230\ntestset: URL, BLEU: 1.3, chr-F: 0.101\ntestset: URL, BLEU: 11.7, chr-F: 0.384\ntestset: URL, BLEU: 2.8, chr-F: 0.180\ntestset: URL, BLEU: 8.1, chr-F: 0.353\ntestset: URL, BLEU: 0.5, chr-F: 0.015\ntestset: URL, BLEU: 12.3, chr-F: 0.409\ntestset: URL, BLEU: 0.5, chr-F: 0.025",
"### System Info:\n\n\n* hf\\_name: eng-iir\n* source\\_languages: eng\n* target\\_languages: iir\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur\\_Arab', 'tgk\\_Cyrl', 'hin', 'kur\\_Latn', 'pes\\_Thaa', 'pus', 'san\\_Deva', 'oss', 'tly\\_Latn', 'jdt\\_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes\\_Latn', 'awa', 'sin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: iir\n* short\\_pair: en-iir\n* chrF2\\_score: 0.392\n* bleu: 13.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 63351.0\n* src\\_name: English\n* tgt\\_name: Indo-Iranian languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: iir\n* prefer\\_old: False\n* long\\_pair: eng-iir\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #bn #or #gu #mr #ur #hi #ps #os #as #si #iir #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-iir\n\n\n* source group: English\n* target group: Indo-Iranian languages\n* OPUS readme: eng-iir\n* model: transformer\n* source language(s): eng\n* target language(s): asm awa ben bho gom guj hif\\_Latn hin jdt\\_Cyrl kur\\_Arab kur\\_Latn mai mar npi ori oss pan\\_Guru pes pes\\_Latn pes\\_Thaa pnb pus rom san\\_Deva sin snd\\_Arab tgk\\_Cyrl tly\\_Latn urd zza\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 6.7, chr-F: 0.326\ntestset: URL, BLEU: 6.0, chr-F: 0.283\ntestset: URL, BLEU: 10.4, chr-F: 0.353\ntestset: URL, BLEU: 6.6, chr-F: 0.282\ntestset: URL, BLEU: 2.7, chr-F: 0.249\ntestset: URL, BLEU: 0.4, chr-F: 0.122\ntestset: URL, BLEU: 15.3, chr-F: 0.459\ntestset: URL, BLEU: 3.7, chr-F: 0.161\ntestset: URL, BLEU: 3.4, chr-F: 0.227\ntestset: URL, BLEU: 18.5, chr-F: 0.365\ntestset: URL, BLEU: 1.0, chr-F: 0.064\ntestset: URL, BLEU: 17.0, chr-F: 0.461\ntestset: URL, BLEU: 3.9, chr-F: 0.122\ntestset: URL, BLEU: 5.5, chr-F: 0.059\ntestset: URL, BLEU: 4.0, chr-F: 0.125\ntestset: URL, BLEU: 0.3, chr-F: 0.008\ntestset: URL, BLEU: 9.3, chr-F: 0.445\ntestset: URL, BLEU: 20.7, chr-F: 0.473\ntestset: URL, BLEU: 13.7, chr-F: 0.392\ntestset: URL, BLEU: 0.6, chr-F: 0.060\ntestset: URL, BLEU: 2.4, chr-F: 0.193\ntestset: URL, BLEU: 2.1, chr-F: 0.174\ntestset: URL, BLEU: 9.7, chr-F: 0.355\ntestset: URL, BLEU: 1.0, chr-F: 0.126\ntestset: URL, BLEU: 1.3, chr-F: 0.230\ntestset: URL, BLEU: 1.3, chr-F: 0.101\ntestset: URL, BLEU: 11.7, chr-F: 0.384\ntestset: URL, BLEU: 2.8, chr-F: 0.180\ntestset: URL, BLEU: 8.1, chr-F: 0.353\ntestset: URL, BLEU: 0.5, chr-F: 0.015\ntestset: URL, BLEU: 12.3, chr-F: 0.409\ntestset: URL, BLEU: 0.5, chr-F: 0.025",
"### System Info:\n\n\n* hf\\_name: eng-iir\n* source\\_languages: eng\n* target\\_languages: iir\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur\\_Arab', 'tgk\\_Cyrl', 'hin', 'kur\\_Latn', 'pes\\_Thaa', 'pus', 'san\\_Deva', 'oss', 'tly\\_Latn', 'jdt\\_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes\\_Latn', 'awa', 'sin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: iir\n* short\\_pair: en-iir\n* chrF2\\_score: 0.392\n* bleu: 13.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 63351.0\n* src\\_name: English\n* tgt\\_name: Indo-Iranian languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: iir\n* prefer\\_old: False\n* long\\_pair: eng-iir\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
72,
953,
626
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #bn #or #gu #mr #ur #hi #ps #os #as #si #iir #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-iir\n\n\n* source group: English\n* target group: Indo-Iranian languages\n* OPUS readme: eng-iir\n* model: transformer\n* source language(s): eng\n* target language(s): asm awa ben bho gom guj hif\\_Latn hin jdt\\_Cyrl kur\\_Arab kur\\_Latn mai mar npi ori oss pan\\_Guru pes pes\\_Latn pes\\_Thaa pnb pus rom san\\_Deva sin snd\\_Arab tgk\\_Cyrl tly\\_Latn urd zza\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 6.7, chr-F: 0.326\ntestset: URL, BLEU: 6.0, chr-F: 0.283\ntestset: URL, BLEU: 10.4, chr-F: 0.353\ntestset: URL, BLEU: 6.6, chr-F: 0.282\ntestset: URL, BLEU: 2.7, chr-F: 0.249\ntestset: URL, BLEU: 0.4, chr-F: 0.122\ntestset: URL, BLEU: 15.3, chr-F: 0.459\ntestset: URL, BLEU: 3.7, chr-F: 0.161\ntestset: URL, BLEU: 3.4, chr-F: 0.227\ntestset: URL, BLEU: 18.5, chr-F: 0.365\ntestset: URL, BLEU: 1.0, chr-F: 0.064\ntestset: URL, BLEU: 17.0, chr-F: 0.461\ntestset: URL, BLEU: 3.9, chr-F: 0.122\ntestset: URL, BLEU: 5.5, chr-F: 0.059\ntestset: URL, BLEU: 4.0, chr-F: 0.125\ntestset: URL, BLEU: 0.3, chr-F: 0.008\ntestset: URL, BLEU: 9.3, chr-F: 0.445\ntestset: URL, BLEU: 20.7, chr-F: 0.473\ntestset: URL, BLEU: 13.7, chr-F: 0.392\ntestset: URL, BLEU: 0.6, chr-F: 0.060\ntestset: URL, BLEU: 2.4, chr-F: 0.193\ntestset: URL, BLEU: 2.1, chr-F: 0.174\ntestset: URL, BLEU: 9.7, chr-F: 0.355\ntestset: URL, BLEU: 1.0, chr-F: 0.126\ntestset: URL, BLEU: 1.3, chr-F: 0.230\ntestset: URL, BLEU: 1.3, chr-F: 0.101\ntestset: URL, BLEU: 11.7, chr-F: 0.384\ntestset: URL, BLEU: 2.8, chr-F: 
0.180\ntestset: URL, BLEU: 8.1, chr-F: 0.353\ntestset: URL, BLEU: 0.5, chr-F: 0.015\ntestset: URL, BLEU: 12.3, chr-F: 0.409\ntestset: URL, BLEU: 0.5, chr-F: 0.025### System Info:\n\n\n* hf\\_name: eng-iir\n* source\\_languages: eng\n* target\\_languages: iir\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur\\_Arab', 'tgk\\_Cyrl', 'hin', 'kur\\_Latn', 'pes\\_Thaa', 'pus', 'san\\_Deva', 'oss', 'tly\\_Latn', 'jdt\\_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes\\_Latn', 'awa', 'sin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: iir\n* short\\_pair: en-iir\n* chrF2\\_score: 0.392\n* bleu: 13.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 63351.0\n* src\\_name: English\n* tgt\\_name: Indo-Iranian languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: iir\n* prefer\\_old: False\n* long\\_pair: eng-iir\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-ilo
* source languages: en
* target languages: ilo
* OPUS readme: [en-ilo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ilo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ilo/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ilo/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ilo/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ilo | 33.2 | 0.584 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ilo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ilo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ilo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ilo
* source languages: en
* target languages: ilo
* OPUS readme: en-ilo
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.2, chr-F: 0.584
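The flattened benchmark lines in this dump all follow the same `testset: …, BLEU: …, chr-F: …` pattern, so they can be recovered into structured records. A minimal parser sketch (the regex is an assumption based on the lines shown here, not part of any dataset tooling):

```python
import re

# Matches flattened benchmark lines such as:
#   "testset: URL, BLEU: 33.2, chr-F: 0.584"
BENCH_RE = re.compile(
    r"testset:\s*(?P<testset>\S+),\s*BLEU:\s*(?P<bleu>[\d.]+),\s*chr-F:\s*(?P<chrf>[\d.]+)"
)

def parse_benchmarks(text):
    """Return a list of {'testset', 'bleu', 'chrf'} dicts found in *text*."""
    return [
        {"testset": m["testset"], "bleu": float(m["bleu"]), "chrf": float(m["chrf"])}
        for m in BENCH_RE.finditer(text)
    ]

rows = parse_benchmarks("testset: URL, BLEU: 33.2, chr-F: 0.584")
# rows[0] -> {"testset": "URL", "bleu": 33.2, "chrf": 0.584}
```

Note the test-set names are anonymized to `URL` in this export; against the original cards the first capture group would hold names like `Tatoeba.en.ilo`.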
| [
"### opus-mt-en-ilo\n\n\n* source languages: en\n* target languages: ilo\n* OPUS readme: en-ilo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.2, chr-F: 0.584"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ilo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ilo\n\n\n* source languages: en\n* target languages: ilo\n* OPUS readme: en-ilo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.2, chr-F: 0.584"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ilo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ilo\n\n\n* source languages: en\n* target languages: ilo\n* OPUS readme: en-ilo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.2, chr-F: 0.584"
] |
translation | transformers |
### eng-inc
* source group: English
* target group: Indic languages
* OPUS readme: [eng-inc](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-inc/README.md)
* model: transformer
* source language(s): eng
* target language(s): asm awa ben bho gom guj hif_Latn hin mai mar npi ori pan_Guru pnb rom san_Deva sin snd_Arab urd
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-enghin.eng.hin | 8.2 | 0.342 |
| newsdev2019-engu-engguj.eng.guj | 6.5 | 0.293 |
| newstest2014-hien-enghin.eng.hin | 11.4 | 0.364 |
| newstest2019-engu-engguj.eng.guj | 7.2 | 0.296 |
| Tatoeba-test.eng-asm.eng.asm | 2.7 | 0.277 |
| Tatoeba-test.eng-awa.eng.awa | 0.5 | 0.132 |
| Tatoeba-test.eng-ben.eng.ben | 16.7 | 0.470 |
| Tatoeba-test.eng-bho.eng.bho | 4.3 | 0.227 |
| Tatoeba-test.eng-guj.eng.guj | 17.5 | 0.373 |
| Tatoeba-test.eng-hif.eng.hif | 0.6 | 0.028 |
| Tatoeba-test.eng-hin.eng.hin | 17.7 | 0.469 |
| Tatoeba-test.eng-kok.eng.kok | 1.7 | 0.000 |
| Tatoeba-test.eng-lah.eng.lah | 0.3 | 0.028 |
| Tatoeba-test.eng-mai.eng.mai | 15.6 | 0.429 |
| Tatoeba-test.eng-mar.eng.mar | 21.3 | 0.477 |
| Tatoeba-test.eng.multi | 17.3 | 0.448 |
| Tatoeba-test.eng-nep.eng.nep | 0.8 | 0.081 |
| Tatoeba-test.eng-ori.eng.ori | 2.2 | 0.208 |
| Tatoeba-test.eng-pan.eng.pan | 8.0 | 0.347 |
| Tatoeba-test.eng-rom.eng.rom | 0.4 | 0.197 |
| Tatoeba-test.eng-san.eng.san | 0.5 | 0.108 |
| Tatoeba-test.eng-sin.eng.sin | 9.1 | 0.364 |
| Tatoeba-test.eng-snd.eng.snd | 4.4 | 0.284 |
| Tatoeba-test.eng-urd.eng.urd | 13.3 | 0.423 |
### System Info:
- hf_name: eng-inc
- source_languages: eng
- target_languages: inc
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-inc/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc']
- src_constituents: {'eng'}
- tgt_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-inc/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: inc
- short_pair: en-inc
- chrF2_score: 0.44799999999999995
- bleu: 17.3
- brevity_penalty: 1.0
- ref_len: 59917.0
- src_name: English
- tgt_name: Indic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: inc
- prefer_old: False
- long_pair: eng-inc
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "bn", "or", "gu", "mr", "ur", "hi", "as", "si", "inc"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-inc | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"bn",
"or",
"gu",
"mr",
"ur",
"hi",
"as",
"si",
"inc",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"bn",
"or",
"gu",
"mr",
"ur",
"hi",
"as",
"si",
"inc"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #bn #or #gu #mr #ur #hi #as #si #inc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-inc
* source group: English
* target group: Indic languages
* OPUS readme: eng-inc
* model: transformer
* source language(s): eng
* target language(s): asm awa ben bho gom guj hif\_Latn hin mai mar npi ori pan\_Guru pnb rom san\_Deva sin snd\_Arab urd
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
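The sentence-initial `>>id<<` token requirement above amounts to plain string preparation before tokenization. A minimal sketch, assuming a hypothetical `prepare_source` helper (not part of the transformers API) and using a subset of this model's listed target IDs:

```python
# Hedged sketch: multilingual OPUS-MT models select the target language via a
# ">>id<<" token prepended to the source sentence, as described above.
VALID_TARGETS = {"asm", "ben", "guj", "hin", "mar", "urd"}  # illustrative subset

def prepare_source(sentence: str, target_id: str) -> str:
    """Prepend the sentence-initial target-language token (hypothetical helper)."""
    if target_id not in VALID_TARGETS:
        raise ValueError(f"unknown target language ID: {target_id}")
    return f">>{target_id}<< {sentence}"

print(prepare_source("How are you?", "hin"))  # >>hin<< How are you?
```

The prepared string would then be fed to the model's tokenizer as usual; without the token, the model has no signal for which Indic target language to emit.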
Benchmarks
----------
testset: URL, BLEU: 8.2, chr-F: 0.342
testset: URL, BLEU: 6.5, chr-F: 0.293
testset: URL, BLEU: 11.4, chr-F: 0.364
testset: URL, BLEU: 7.2, chr-F: 0.296
testset: URL, BLEU: 2.7, chr-F: 0.277
testset: URL, BLEU: 0.5, chr-F: 0.132
testset: URL, BLEU: 16.7, chr-F: 0.470
testset: URL, BLEU: 4.3, chr-F: 0.227
testset: URL, BLEU: 17.5, chr-F: 0.373
testset: URL, BLEU: 0.6, chr-F: 0.028
testset: URL, BLEU: 17.7, chr-F: 0.469
testset: URL, BLEU: 1.7, chr-F: 0.000
testset: URL, BLEU: 0.3, chr-F: 0.028
testset: URL, BLEU: 15.6, chr-F: 0.429
testset: URL, BLEU: 21.3, chr-F: 0.477
testset: URL, BLEU: 17.3, chr-F: 0.448
testset: URL, BLEU: 0.8, chr-F: 0.081
testset: URL, BLEU: 2.2, chr-F: 0.208
testset: URL, BLEU: 8.0, chr-F: 0.347
testset: URL, BLEU: 0.4, chr-F: 0.197
testset: URL, BLEU: 0.5, chr-F: 0.108
testset: URL, BLEU: 9.1, chr-F: 0.364
testset: URL, BLEU: 4.4, chr-F: 0.284
testset: URL, BLEU: 13.3, chr-F: 0.423
### System Info:
* hf\_name: eng-inc
* source\_languages: eng
* target\_languages: inc
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc']
* src\_constituents: {'eng'}
* tgt\_constituents: {'pnb', 'gom', 'ben', 'hif\_Latn', 'ori', 'guj', 'pan\_Guru', 'snd\_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san\_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: inc
* short\_pair: en-inc
* chrF2\_score: 0.44799999999999995
* bleu: 17.3
* brevity\_penalty: 1.0
* ref\_len: 59917.0
* src\_name: English
* tgt\_name: Indic languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: inc
* prefer\_old: False
* long\_pair: eng-inc
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-inc\n\n\n* source group: English\n* target group: Indic languages\n* OPUS readme: eng-inc\n* model: transformer\n* source language(s): eng\n* target language(s): asm awa ben bho gom guj hif\\_Latn hin mai mar npi ori pan\\_Guru pnb rom san\\_Deva sin snd\\_Arab urd\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 8.2, chr-F: 0.342\ntestset: URL, BLEU: 6.5, chr-F: 0.293\ntestset: URL, BLEU: 11.4, chr-F: 0.364\ntestset: URL, BLEU: 7.2, chr-F: 0.296\ntestset: URL, BLEU: 2.7, chr-F: 0.277\ntestset: URL, BLEU: 0.5, chr-F: 0.132\ntestset: URL, BLEU: 16.7, chr-F: 0.470\ntestset: URL, BLEU: 4.3, chr-F: 0.227\ntestset: URL, BLEU: 17.5, chr-F: 0.373\ntestset: URL, BLEU: 0.6, chr-F: 0.028\ntestset: URL, BLEU: 17.7, chr-F: 0.469\ntestset: URL, BLEU: 1.7, chr-F: 0.000\ntestset: URL, BLEU: 0.3, chr-F: 0.028\ntestset: URL, BLEU: 15.6, chr-F: 0.429\ntestset: URL, BLEU: 21.3, chr-F: 0.477\ntestset: URL, BLEU: 17.3, chr-F: 0.448\ntestset: URL, BLEU: 0.8, chr-F: 0.081\ntestset: URL, BLEU: 2.2, chr-F: 0.208\ntestset: URL, BLEU: 8.0, chr-F: 0.347\ntestset: URL, BLEU: 0.4, chr-F: 0.197\ntestset: URL, BLEU: 0.5, chr-F: 0.108\ntestset: URL, BLEU: 9.1, chr-F: 0.364\ntestset: URL, BLEU: 4.4, chr-F: 0.284\ntestset: URL, BLEU: 13.3, chr-F: 0.423",
"### System Info:\n\n\n* hf\\_name: eng-inc\n* source\\_languages: eng\n* target\\_languages: inc\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san\\_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: inc\n* short\\_pair: en-inc\n* chrF2\\_score: 0.44799999999999995\n* bleu: 17.3\n* brevity\\_penalty: 1.0\n* ref\\_len: 59917.0\n* src\\_name: English\n* tgt\\_name: Indic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: inc\n* prefer\\_old: False\n* long\\_pair: eng-inc\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #bn #or #gu #mr #ur #hi #as #si #inc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-inc\n\n\n* source group: English\n* target group: Indic languages\n* OPUS readme: eng-inc\n* model: transformer\n* source language(s): eng\n* target language(s): asm awa ben bho gom guj hif\\_Latn hin mai mar npi ori pan\\_Guru pnb rom san\\_Deva sin snd\\_Arab urd\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 8.2, chr-F: 0.342\ntestset: URL, BLEU: 6.5, chr-F: 0.293\ntestset: URL, BLEU: 11.4, chr-F: 0.364\ntestset: URL, BLEU: 7.2, chr-F: 0.296\ntestset: URL, BLEU: 2.7, chr-F: 0.277\ntestset: URL, BLEU: 0.5, chr-F: 0.132\ntestset: URL, BLEU: 16.7, chr-F: 0.470\ntestset: URL, BLEU: 4.3, chr-F: 0.227\ntestset: URL, BLEU: 17.5, chr-F: 0.373\ntestset: URL, BLEU: 0.6, chr-F: 0.028\ntestset: URL, BLEU: 17.7, chr-F: 0.469\ntestset: URL, BLEU: 1.7, chr-F: 0.000\ntestset: URL, BLEU: 0.3, chr-F: 0.028\ntestset: URL, BLEU: 15.6, chr-F: 0.429\ntestset: URL, BLEU: 21.3, chr-F: 0.477\ntestset: URL, BLEU: 17.3, chr-F: 0.448\ntestset: URL, BLEU: 0.8, chr-F: 0.081\ntestset: URL, BLEU: 2.2, chr-F: 0.208\ntestset: URL, BLEU: 8.0, chr-F: 0.347\ntestset: URL, BLEU: 0.4, chr-F: 0.197\ntestset: URL, BLEU: 0.5, chr-F: 0.108\ntestset: URL, BLEU: 9.1, chr-F: 0.364\ntestset: URL, BLEU: 4.4, chr-F: 0.284\ntestset: URL, BLEU: 13.3, chr-F: 0.423",
"### System Info:\n\n\n* hf\\_name: eng-inc\n* source\\_languages: eng\n* target\\_languages: inc\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san\\_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: inc\n* short\\_pair: en-inc\n* chrF2\\_score: 0.44799999999999995\n* bleu: 17.3\n* brevity\\_penalty: 1.0\n* ref\\_len: 59917.0\n* src\\_name: English\n* tgt\\_name: Indic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: inc\n* prefer\\_old: False\n* long\\_pair: eng-inc\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
67,
719,
538
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #bn #or #gu #mr #ur #hi #as #si #inc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-inc\n\n\n* source group: English\n* target group: Indic languages\n* OPUS readme: eng-inc\n* model: transformer\n* source language(s): eng\n* target language(s): asm awa ben bho gom guj hif\\_Latn hin mai mar npi ori pan\\_Guru pnb rom san\\_Deva sin snd\\_Arab urd\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 8.2, chr-F: 0.342\ntestset: URL, BLEU: 6.5, chr-F: 0.293\ntestset: URL, BLEU: 11.4, chr-F: 0.364\ntestset: URL, BLEU: 7.2, chr-F: 0.296\ntestset: URL, BLEU: 2.7, chr-F: 0.277\ntestset: URL, BLEU: 0.5, chr-F: 0.132\ntestset: URL, BLEU: 16.7, chr-F: 0.470\ntestset: URL, BLEU: 4.3, chr-F: 0.227\ntestset: URL, BLEU: 17.5, chr-F: 0.373\ntestset: URL, BLEU: 0.6, chr-F: 0.028\ntestset: URL, BLEU: 17.7, chr-F: 0.469\ntestset: URL, BLEU: 1.7, chr-F: 0.000\ntestset: URL, BLEU: 0.3, chr-F: 0.028\ntestset: URL, BLEU: 15.6, chr-F: 0.429\ntestset: URL, BLEU: 21.3, chr-F: 0.477\ntestset: URL, BLEU: 17.3, chr-F: 0.448\ntestset: URL, BLEU: 0.8, chr-F: 0.081\ntestset: URL, BLEU: 2.2, chr-F: 0.208\ntestset: URL, BLEU: 8.0, chr-F: 0.347\ntestset: URL, BLEU: 0.4, chr-F: 0.197\ntestset: URL, BLEU: 0.5, chr-F: 0.108\ntestset: URL, BLEU: 9.1, chr-F: 0.364\ntestset: URL, BLEU: 4.4, chr-F: 0.284\ntestset: URL, BLEU: 13.3, chr-F: 0.423### System Info:\n\n\n* hf\\_name: eng-inc\n* source\\_languages: eng\n* target\\_languages: inc\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 
'inc']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san\\_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: inc\n* short\\_pair: en-inc\n* chrF2\\_score: 0.44799999999999995\n* bleu: 17.3\n* brevity\\_penalty: 1.0\n* ref\\_len: 59917.0\n* src\\_name: English\n* tgt\\_name: Indic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: inc\n* prefer\\_old: False\n* long\\_pair: eng-inc\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### eng-ine
* source group: English
* target group: Indo-European languages
* OPUS readme: [eng-ine](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ine/README.md)
* model: transformer
* source language(s): eng
* target language(s): afr aln ang_Latn arg asm ast awa bel bel_Latn ben bho bos_Latn bre bul bul_Latn cat ces cor cos csb_Latn cym dan deu dsb egl ell enm_Latn ext fao fra frm_Latn frr fry gcf_Latn gla gle glg glv gom gos got_Goth grc_Grek gsw guj hat hif_Latn hin hrv hsb hye ind isl ita jdt_Cyrl ksh kur_Arab kur_Latn lad lad_Latn lat_Latn lav lij lit lld_Latn lmo ltg ltz mai mar max_Latn mfe min mkd mwl nds nld nno nob nob_Hebr non_Latn npi oci ori orv_Cyrl oss pan_Guru pap pdc pes pes_Latn pes_Thaa pms pnb pol por prg_Latn pus roh rom ron rue rus san_Deva scn sco sgs sin slv snd_Arab spa sqi srp_Cyrl srp_Latn stq swe swg tgk_Cyrl tly_Latn tmw_Latn ukr urd vec wln yid zlm_Latn zsm_Latn zza
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-enghin.eng.hin | 6.2 | 0.317 |
| newsdev2016-enro-engron.eng.ron | 22.1 | 0.525 |
| newsdev2017-enlv-englav.eng.lav | 17.4 | 0.486 |
| newsdev2019-engu-engguj.eng.guj | 6.5 | 0.303 |
| newsdev2019-enlt-englit.eng.lit | 14.9 | 0.476 |
| newsdiscussdev2015-enfr-engfra.eng.fra | 26.4 | 0.547 |
| newsdiscusstest2015-enfr-engfra.eng.fra | 30.0 | 0.575 |
| newssyscomb2009-engces.eng.ces | 14.7 | 0.442 |
| newssyscomb2009-engdeu.eng.deu | 16.7 | 0.487 |
| newssyscomb2009-engfra.eng.fra | 24.8 | 0.547 |
| newssyscomb2009-engita.eng.ita | 25.2 | 0.562 |
| newssyscomb2009-engspa.eng.spa | 27.0 | 0.554 |
| news-test2008-engces.eng.ces | 13.0 | 0.417 |
| news-test2008-engdeu.eng.deu | 17.4 | 0.480 |
| news-test2008-engfra.eng.fra | 22.3 | 0.519 |
| news-test2008-engspa.eng.spa | 24.9 | 0.532 |
| newstest2009-engces.eng.ces | 13.6 | 0.432 |
| newstest2009-engdeu.eng.deu | 16.6 | 0.482 |
| newstest2009-engfra.eng.fra | 23.5 | 0.535 |
| newstest2009-engita.eng.ita | 25.5 | 0.561 |
| newstest2009-engspa.eng.spa | 26.3 | 0.551 |
| newstest2010-engces.eng.ces | 14.2 | 0.436 |
| newstest2010-engdeu.eng.deu | 18.3 | 0.492 |
| newstest2010-engfra.eng.fra | 25.7 | 0.550 |
| newstest2010-engspa.eng.spa | 30.5 | 0.578 |
| newstest2011-engces.eng.ces | 15.1 | 0.439 |
| newstest2011-engdeu.eng.deu | 17.1 | 0.478 |
| newstest2011-engfra.eng.fra | 28.0 | 0.569 |
| newstest2011-engspa.eng.spa | 31.9 | 0.580 |
| newstest2012-engces.eng.ces | 13.6 | 0.418 |
| newstest2012-engdeu.eng.deu | 17.0 | 0.475 |
| newstest2012-engfra.eng.fra | 26.1 | 0.553 |
| newstest2012-engrus.eng.rus | 21.4 | 0.506 |
| newstest2012-engspa.eng.spa | 31.4 | 0.577 |
| newstest2013-engces.eng.ces | 15.3 | 0.438 |
| newstest2013-engdeu.eng.deu | 20.3 | 0.501 |
| newstest2013-engfra.eng.fra | 26.0 | 0.540 |
| newstest2013-engrus.eng.rus | 16.1 | 0.449 |
| newstest2013-engspa.eng.spa | 28.6 | 0.555 |
| newstest2014-hien-enghin.eng.hin | 9.5 | 0.344 |
| newstest2015-encs-engces.eng.ces | 14.8 | 0.440 |
| newstest2015-ende-engdeu.eng.deu | 22.6 | 0.523 |
| newstest2015-enru-engrus.eng.rus | 18.8 | 0.483 |
| newstest2016-encs-engces.eng.ces | 16.8 | 0.457 |
| newstest2016-ende-engdeu.eng.deu | 26.2 | 0.555 |
| newstest2016-enro-engron.eng.ron | 21.2 | 0.510 |
| newstest2016-enru-engrus.eng.rus | 17.6 | 0.471 |
| newstest2017-encs-engces.eng.ces | 13.6 | 0.421 |
| newstest2017-ende-engdeu.eng.deu | 21.5 | 0.516 |
| newstest2017-enlv-englav.eng.lav | 13.0 | 0.452 |
| newstest2017-enru-engrus.eng.rus | 18.7 | 0.486 |
| newstest2018-encs-engces.eng.ces | 13.5 | 0.425 |
| newstest2018-ende-engdeu.eng.deu | 29.8 | 0.581 |
| newstest2018-enru-engrus.eng.rus | 16.1 | 0.472 |
| newstest2019-encs-engces.eng.ces | 14.8 | 0.435 |
| newstest2019-ende-engdeu.eng.deu | 26.6 | 0.554 |
| newstest2019-engu-engguj.eng.guj | 6.9 | 0.313 |
| newstest2019-enlt-englit.eng.lit | 10.6 | 0.429 |
| newstest2019-enru-engrus.eng.rus | 17.5 | 0.452 |
| Tatoeba-test.eng-afr.eng.afr | 52.1 | 0.708 |
| Tatoeba-test.eng-ang.eng.ang | 5.1 | 0.131 |
| Tatoeba-test.eng-arg.eng.arg | 1.2 | 0.099 |
| Tatoeba-test.eng-asm.eng.asm | 2.9 | 0.259 |
| Tatoeba-test.eng-ast.eng.ast | 14.1 | 0.408 |
| Tatoeba-test.eng-awa.eng.awa | 0.3 | 0.002 |
| Tatoeba-test.eng-bel.eng.bel | 18.1 | 0.450 |
| Tatoeba-test.eng-ben.eng.ben | 13.5 | 0.432 |
| Tatoeba-test.eng-bho.eng.bho | 0.3 | 0.003 |
| Tatoeba-test.eng-bre.eng.bre | 10.4 | 0.318 |
| Tatoeba-test.eng-bul.eng.bul | 38.7 | 0.592 |
| Tatoeba-test.eng-cat.eng.cat | 42.0 | 0.633 |
| Tatoeba-test.eng-ces.eng.ces | 32.3 | 0.546 |
| Tatoeba-test.eng-cor.eng.cor | 0.5 | 0.079 |
| Tatoeba-test.eng-cos.eng.cos | 3.1 | 0.148 |
| Tatoeba-test.eng-csb.eng.csb | 1.4 | 0.216 |
| Tatoeba-test.eng-cym.eng.cym | 22.4 | 0.470 |
| Tatoeba-test.eng-dan.eng.dan | 49.7 | 0.671 |
| Tatoeba-test.eng-deu.eng.deu | 31.7 | 0.554 |
| Tatoeba-test.eng-dsb.eng.dsb | 1.1 | 0.139 |
| Tatoeba-test.eng-egl.eng.egl | 0.9 | 0.089 |
| Tatoeba-test.eng-ell.eng.ell | 42.7 | 0.640 |
| Tatoeba-test.eng-enm.eng.enm | 3.5 | 0.259 |
| Tatoeba-test.eng-ext.eng.ext | 6.4 | 0.235 |
| Tatoeba-test.eng-fao.eng.fao | 6.6 | 0.285 |
| Tatoeba-test.eng-fas.eng.fas | 5.7 | 0.257 |
| Tatoeba-test.eng-fra.eng.fra | 38.4 | 0.595 |
| Tatoeba-test.eng-frm.eng.frm | 0.9 | 0.149 |
| Tatoeba-test.eng-frr.eng.frr | 8.4 | 0.145 |
| Tatoeba-test.eng-fry.eng.fry | 16.5 | 0.411 |
| Tatoeba-test.eng-gcf.eng.gcf | 0.6 | 0.098 |
| Tatoeba-test.eng-gla.eng.gla | 11.6 | 0.361 |
| Tatoeba-test.eng-gle.eng.gle | 32.5 | 0.546 |
| Tatoeba-test.eng-glg.eng.glg | 38.4 | 0.602 |
| Tatoeba-test.eng-glv.eng.glv | 23.1 | 0.418 |
| Tatoeba-test.eng-gos.eng.gos | 0.7 | 0.137 |
| Tatoeba-test.eng-got.eng.got | 0.2 | 0.010 |
| Tatoeba-test.eng-grc.eng.grc | 0.0 | 0.005 |
| Tatoeba-test.eng-gsw.eng.gsw | 0.9 | 0.108 |
| Tatoeba-test.eng-guj.eng.guj | 20.8 | 0.391 |
| Tatoeba-test.eng-hat.eng.hat | 34.0 | 0.537 |
| Tatoeba-test.eng-hbs.eng.hbs | 33.7 | 0.567 |
| Tatoeba-test.eng-hif.eng.hif | 2.8 | 0.269 |
| Tatoeba-test.eng-hin.eng.hin | 15.6 | 0.437 |
| Tatoeba-test.eng-hsb.eng.hsb | 5.4 | 0.320 |
| Tatoeba-test.eng-hye.eng.hye | 17.4 | 0.426 |
| Tatoeba-test.eng-isl.eng.isl | 17.4 | 0.436 |
| Tatoeba-test.eng-ita.eng.ita | 40.4 | 0.636 |
| Tatoeba-test.eng-jdt.eng.jdt | 6.4 | 0.008 |
| Tatoeba-test.eng-kok.eng.kok | 6.6 | 0.005 |
| Tatoeba-test.eng-ksh.eng.ksh | 0.8 | 0.123 |
| Tatoeba-test.eng-kur.eng.kur | 10.2 | 0.209 |
| Tatoeba-test.eng-lad.eng.lad | 0.8 | 0.163 |
| Tatoeba-test.eng-lah.eng.lah | 0.2 | 0.001 |
| Tatoeba-test.eng-lat.eng.lat | 9.4 | 0.372 |
| Tatoeba-test.eng-lav.eng.lav | 30.3 | 0.559 |
| Tatoeba-test.eng-lij.eng.lij | 1.0 | 0.130 |
| Tatoeba-test.eng-lit.eng.lit | 25.3 | 0.560 |
| Tatoeba-test.eng-lld.eng.lld | 0.4 | 0.139 |
| Tatoeba-test.eng-lmo.eng.lmo | 0.6 | 0.108 |
| Tatoeba-test.eng-ltz.eng.ltz | 18.1 | 0.388 |
| Tatoeba-test.eng-mai.eng.mai | 17.2 | 0.464 |
| Tatoeba-test.eng-mar.eng.mar | 18.0 | 0.451 |
| Tatoeba-test.eng-mfe.eng.mfe | 81.0 | 0.899 |
| Tatoeba-test.eng-mkd.eng.mkd | 37.6 | 0.587 |
| Tatoeba-test.eng-msa.eng.msa | 27.7 | 0.519 |
| Tatoeba-test.eng.multi | 32.6 | 0.539 |
| Tatoeba-test.eng-mwl.eng.mwl | 3.8 | 0.134 |
| Tatoeba-test.eng-nds.eng.nds | 14.3 | 0.401 |
| Tatoeba-test.eng-nep.eng.nep | 0.5 | 0.002 |
| Tatoeba-test.eng-nld.eng.nld | 44.0 | 0.642 |
| Tatoeba-test.eng-non.eng.non | 0.7 | 0.118 |
| Tatoeba-test.eng-nor.eng.nor | 42.7 | 0.623 |
| Tatoeba-test.eng-oci.eng.oci | 7.2 | 0.295 |
| Tatoeba-test.eng-ori.eng.ori | 2.7 | 0.257 |
| Tatoeba-test.eng-orv.eng.orv | 0.2 | 0.008 |
| Tatoeba-test.eng-oss.eng.oss | 2.9 | 0.264 |
| Tatoeba-test.eng-pan.eng.pan | 7.4 | 0.337 |
| Tatoeba-test.eng-pap.eng.pap | 48.5 | 0.656 |
| Tatoeba-test.eng-pdc.eng.pdc | 1.8 | 0.145 |
| Tatoeba-test.eng-pms.eng.pms | 0.7 | 0.136 |
| Tatoeba-test.eng-pol.eng.pol | 31.1 | 0.563 |
| Tatoeba-test.eng-por.eng.por | 37.0 | 0.605 |
| Tatoeba-test.eng-prg.eng.prg | 0.2 | 0.100 |
| Tatoeba-test.eng-pus.eng.pus | 1.0 | 0.134 |
| Tatoeba-test.eng-roh.eng.roh | 2.3 | 0.236 |
| Tatoeba-test.eng-rom.eng.rom | 7.8 | 0.340 |
| Tatoeba-test.eng-ron.eng.ron | 34.3 | 0.585 |
| Tatoeba-test.eng-rue.eng.rue | 0.2 | 0.010 |
| Tatoeba-test.eng-rus.eng.rus | 29.6 | 0.526 |
| Tatoeba-test.eng-san.eng.san | 2.4 | 0.125 |
| Tatoeba-test.eng-scn.eng.scn | 1.6 | 0.079 |
| Tatoeba-test.eng-sco.eng.sco | 33.6 | 0.562 |
| Tatoeba-test.eng-sgs.eng.sgs | 3.4 | 0.114 |
| Tatoeba-test.eng-sin.eng.sin | 9.2 | 0.349 |
| Tatoeba-test.eng-slv.eng.slv | 15.6 | 0.334 |
| Tatoeba-test.eng-snd.eng.snd | 9.1 | 0.324 |
| Tatoeba-test.eng-spa.eng.spa | 43.4 | 0.645 |
| Tatoeba-test.eng-sqi.eng.sqi | 39.0 | 0.621 |
| Tatoeba-test.eng-stq.eng.stq | 10.8 | 0.373 |
| Tatoeba-test.eng-swe.eng.swe | 49.9 | 0.663 |
| Tatoeba-test.eng-swg.eng.swg | 0.7 | 0.137 |
| Tatoeba-test.eng-tgk.eng.tgk | 6.4 | 0.346 |
| Tatoeba-test.eng-tly.eng.tly | 0.5 | 0.055 |
| Tatoeba-test.eng-ukr.eng.ukr | 31.4 | 0.536 |
| Tatoeba-test.eng-urd.eng.urd | 11.1 | 0.389 |
| Tatoeba-test.eng-vec.eng.vec | 1.3 | 0.110 |
| Tatoeba-test.eng-wln.eng.wln | 6.8 | 0.233 |
| Tatoeba-test.eng-yid.eng.yid | 5.8 | 0.295 |
| Tatoeba-test.eng-zza.eng.zza | 0.8 | 0.086 |
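The per-language Tatoeba rows above can be post-processed to rank target languages by quality. A small sketch, using a handful of BLEU values copied verbatim from the table:

```python
# Rank a few target-language pairs by BLEU, taken from the table above.
tatoeba_bleu = {
    "eng-mfe": 81.0,
    "eng-afr": 52.1,
    "eng-swe": 49.9,
    "eng-dan": 49.7,
    "eng-zza": 0.8,
    "eng-grc": 0.0,
}

ranked = sorted(tatoeba_bleu.items(), key=lambda kv: kv[1], reverse=True)
best_pair, best_bleu = ranked[0]
print(best_pair, best_bleu)  # eng-mfe 81.0
```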
### System Info:
- hf_name: eng-ine
- source_languages: eng
- target_languages: ine
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ine/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']
- src_constituents: {'eng'}
- tgt_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos_Latn', 'lad_Latn', 'lat_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm_Latn', 'srd', 'gcf_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur_Latn', 'arg', 'pes_Thaa', 'sqi', 'csb_Latn', 'fra', 'hat', 'non_Latn', 'sco', 'pnb', 'roh', 'bul_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw_Latn', 'hsb', 'tly_Latn', 'bul', 'bel', 'got_Goth', 'lat_Grek', 'ext', 'gla', 'mai', 'sin', 'hif_Latn', 'eng', 'bre', 'nob_Hebr', 'prg_Latn', 'ang_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr_Arab', 'san_Deva', 'gos', 'rus', 'fao', 'orv_Cyrl', 'bel_Latn', 'cos', 'zza', 'grc_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk_Cyrl', 'hye_Latn', 'pdc', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp_Latn', 'zlm_Latn', 'ind', 'rom', 'hye', 'scn', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus_Latn', 'jdt_Cyrl', 'gsw', 'glv', 'nld', 'snd_Arab', 'kur_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm_Latn', 'ksh', 'pan_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld_Latn', 'ces', 'egl', 'vec', 'max_Latn', 'pes_Latn', 'ltg', 'nds'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: ine
- short_pair: en-ine
- chrF2_score: 0.539
- bleu: 32.6
- brevity_penalty: 0.973
- ref_len: 68664.0
- src_name: English
- tgt_name: Indo-European languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: ine
- prefer_old: False
- long_pair: eng-ine
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "ca", "es", "os", "ro", "fy", "cy", "sc", "is", "yi", "lb", "an", "sq", "fr", "ht", "rm", "ps", "af", "uk", "sl", "lt", "bg", "be", "gd", "si", "br", "mk", "or", "mr", "ru", "fo", "co", "oc", "pl", "gl", "nb", "bn", "id", "hy", "da", "gv", "nl", "pt", "hi", "as", "kw", "ga", "sv", "gu", "wa", "lv", "el", "it", "hr", "ur", "nn", "de", "cs", "ine"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ine | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ca",
"es",
"os",
"ro",
"fy",
"cy",
"sc",
"is",
"yi",
"lb",
"an",
"sq",
"fr",
"ht",
"rm",
"ps",
"af",
"uk",
"sl",
"lt",
"bg",
"be",
"gd",
"si",
"br",
"mk",
"or",
"mr",
"ru",
"fo",
"co",
"oc",
"pl",
"gl",
"nb",
"bn",
"id",
"hy",
"da",
"gv",
"nl",
"pt",
"hi",
"as",
"kw",
"ga",
"sv",
"gu",
"wa",
"lv",
"el",
"it",
"hr",
"ur",
"nn",
"de",
"cs",
"ine",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"ca",
"es",
"os",
"ro",
"fy",
"cy",
"sc",
"is",
"yi",
"lb",
"an",
"sq",
"fr",
"ht",
"rm",
"ps",
"af",
"uk",
"sl",
"lt",
"bg",
"be",
"gd",
"si",
"br",
"mk",
"or",
"mr",
"ru",
"fo",
"co",
"oc",
"pl",
"gl",
"nb",
"bn",
"id",
"hy",
"da",
"gv",
"nl",
"pt",
"hi",
"as",
"kw",
"ga",
"sv",
"gu",
"wa",
"lv",
"el",
"it",
"hr",
"ur",
"nn",
"de",
"cs",
"ine"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ca #es #os #ro #fy #cy #sc #is #yi #lb #an #sq #fr #ht #rm #ps #af #uk #sl #lt #bg #be #gd #si #br #mk #or #mr #ru #fo #co #oc #pl #gl #nb #bn #id #hy #da #gv #nl #pt #hi #as #kw #ga #sv #gu #wa #lv #el #it #hr #ur #nn #de #cs #ine #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-ine
* source group: English
* target group: Indo-European languages
* OPUS readme: eng-ine
* model: transformer
* source language(s): eng
* target language(s): afr aln ang\_Latn arg asm ast awa bel bel\_Latn ben bho bos\_Latn bre bul bul\_Latn cat ces cor cos csb\_Latn cym dan deu dsb egl ell enm\_Latn ext fao fra frm\_Latn frr fry gcf\_Latn gla gle glg glv gom gos got\_Goth grc\_Grek gsw guj hat hif\_Latn hin hrv hsb hye ind isl ita jdt\_Cyrl ksh kur\_Arab kur\_Latn lad lad\_Latn lat\_Latn lav lij lit lld\_Latn lmo ltg ltz mai mar max\_Latn mfe min mkd mwl nds nld nno nob nob\_Hebr non\_Latn npi oci ori orv\_Cyrl oss pan\_Guru pap pdc pes pes\_Latn pes\_Thaa pms pnb pol por prg\_Latn pus roh rom ron rue rus san\_Deva scn sco sgs sin slv snd\_Arab spa sqi srp\_Cyrl srp\_Latn stq swe swg tgk\_Cyrl tly\_Latn tmw\_Latn ukr urd vec wln yid zlm\_Latn zsm\_Latn zza
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 6.2, chr-F: 0.317
testset: URL, BLEU: 22.1, chr-F: 0.525
testset: URL, BLEU: 17.4, chr-F: 0.486
testset: URL, BLEU: 6.5, chr-F: 0.303
testset: URL, BLEU: 14.9, chr-F: 0.476
testset: URL, BLEU: 26.4, chr-F: 0.547
testset: URL, BLEU: 30.0, chr-F: 0.575
testset: URL, BLEU: 14.7, chr-F: 0.442
testset: URL, BLEU: 16.7, chr-F: 0.487
testset: URL, BLEU: 24.8, chr-F: 0.547
testset: URL, BLEU: 25.2, chr-F: 0.562
testset: URL, BLEU: 27.0, chr-F: 0.554
testset: URL, BLEU: 13.0, chr-F: 0.417
testset: URL, BLEU: 17.4, chr-F: 0.480
testset: URL, BLEU: 22.3, chr-F: 0.519
testset: URL, BLEU: 24.9, chr-F: 0.532
testset: URL, BLEU: 13.6, chr-F: 0.432
testset: URL, BLEU: 16.6, chr-F: 0.482
testset: URL, BLEU: 23.5, chr-F: 0.535
testset: URL, BLEU: 25.5, chr-F: 0.561
testset: URL, BLEU: 26.3, chr-F: 0.551
testset: URL, BLEU: 14.2, chr-F: 0.436
testset: URL, BLEU: 18.3, chr-F: 0.492
testset: URL, BLEU: 25.7, chr-F: 0.550
testset: URL, BLEU: 30.5, chr-F: 0.578
testset: URL, BLEU: 15.1, chr-F: 0.439
testset: URL, BLEU: 17.1, chr-F: 0.478
testset: URL, BLEU: 28.0, chr-F: 0.569
testset: URL, BLEU: 31.9, chr-F: 0.580
testset: URL, BLEU: 13.6, chr-F: 0.418
testset: URL, BLEU: 17.0, chr-F: 0.475
testset: URL, BLEU: 26.1, chr-F: 0.553
testset: URL, BLEU: 21.4, chr-F: 0.506
testset: URL, BLEU: 31.4, chr-F: 0.577
testset: URL, BLEU: 15.3, chr-F: 0.438
testset: URL, BLEU: 20.3, chr-F: 0.501
testset: URL, BLEU: 26.0, chr-F: 0.540
testset: URL, BLEU: 16.1, chr-F: 0.449
testset: URL, BLEU: 28.6, chr-F: 0.555
testset: URL, BLEU: 9.5, chr-F: 0.344
testset: URL, BLEU: 14.8, chr-F: 0.440
testset: URL, BLEU: 22.6, chr-F: 0.523
testset: URL, BLEU: 18.8, chr-F: 0.483
testset: URL, BLEU: 16.8, chr-F: 0.457
testset: URL, BLEU: 26.2, chr-F: 0.555
testset: URL, BLEU: 21.2, chr-F: 0.510
testset: URL, BLEU: 17.6, chr-F: 0.471
testset: URL, BLEU: 13.6, chr-F: 0.421
testset: URL, BLEU: 21.5, chr-F: 0.516
testset: URL, BLEU: 13.0, chr-F: 0.452
testset: URL, BLEU: 18.7, chr-F: 0.486
testset: URL, BLEU: 13.5, chr-F: 0.425
testset: URL, BLEU: 29.8, chr-F: 0.581
testset: URL, BLEU: 16.1, chr-F: 0.472
testset: URL, BLEU: 14.8, chr-F: 0.435
testset: URL, BLEU: 26.6, chr-F: 0.554
testset: URL, BLEU: 6.9, chr-F: 0.313
testset: URL, BLEU: 10.6, chr-F: 0.429
testset: URL, BLEU: 17.5, chr-F: 0.452
testset: URL, BLEU: 52.1, chr-F: 0.708
testset: URL, BLEU: 5.1, chr-F: 0.131
testset: URL, BLEU: 1.2, chr-F: 0.099
testset: URL, BLEU: 2.9, chr-F: 0.259
testset: URL, BLEU: 14.1, chr-F: 0.408
testset: URL, BLEU: 0.3, chr-F: 0.002
testset: URL, BLEU: 18.1, chr-F: 0.450
testset: URL, BLEU: 13.5, chr-F: 0.432
testset: URL, BLEU: 0.3, chr-F: 0.003
testset: URL, BLEU: 10.4, chr-F: 0.318
testset: URL, BLEU: 38.7, chr-F: 0.592
testset: URL, BLEU: 42.0, chr-F: 0.633
testset: URL, BLEU: 32.3, chr-F: 0.546
testset: URL, BLEU: 0.5, chr-F: 0.079
testset: URL, BLEU: 3.1, chr-F: 0.148
testset: URL, BLEU: 1.4, chr-F: 0.216
testset: URL, BLEU: 22.4, chr-F: 0.470
testset: URL, BLEU: 49.7, chr-F: 0.671
testset: URL, BLEU: 31.7, chr-F: 0.554
testset: URL, BLEU: 1.1, chr-F: 0.139
testset: URL, BLEU: 0.9, chr-F: 0.089
testset: URL, BLEU: 42.7, chr-F: 0.640
testset: URL, BLEU: 3.5, chr-F: 0.259
testset: URL, BLEU: 6.4, chr-F: 0.235
testset: URL, BLEU: 6.6, chr-F: 0.285
testset: URL, BLEU: 5.7, chr-F: 0.257
testset: URL, BLEU: 38.4, chr-F: 0.595
testset: URL, BLEU: 0.9, chr-F: 0.149
testset: URL, BLEU: 8.4, chr-F: 0.145
testset: URL, BLEU: 16.5, chr-F: 0.411
testset: URL, BLEU: 0.6, chr-F: 0.098
testset: URL, BLEU: 11.6, chr-F: 0.361
testset: URL, BLEU: 32.5, chr-F: 0.546
testset: URL, BLEU: 38.4, chr-F: 0.602
testset: URL, BLEU: 23.1, chr-F: 0.418
testset: URL, BLEU: 0.7, chr-F: 0.137
testset: URL, BLEU: 0.2, chr-F: 0.010
testset: URL, BLEU: 0.0, chr-F: 0.005
testset: URL, BLEU: 0.9, chr-F: 0.108
testset: URL, BLEU: 20.8, chr-F: 0.391
testset: URL, BLEU: 34.0, chr-F: 0.537
testset: URL, BLEU: 33.7, chr-F: 0.567
testset: URL, BLEU: 2.8, chr-F: 0.269
testset: URL, BLEU: 15.6, chr-F: 0.437
testset: URL, BLEU: 5.4, chr-F: 0.320
testset: URL, BLEU: 17.4, chr-F: 0.426
testset: URL, BLEU: 17.4, chr-F: 0.436
testset: URL, BLEU: 40.4, chr-F: 0.636
testset: URL, BLEU: 6.4, chr-F: 0.008
testset: URL, BLEU: 6.6, chr-F: 0.005
testset: URL, BLEU: 0.8, chr-F: 0.123
testset: URL, BLEU: 10.2, chr-F: 0.209
testset: URL, BLEU: 0.8, chr-F: 0.163
testset: URL, BLEU: 0.2, chr-F: 0.001
testset: URL, BLEU: 9.4, chr-F: 0.372
testset: URL, BLEU: 30.3, chr-F: 0.559
testset: URL, BLEU: 1.0, chr-F: 0.130
testset: URL, BLEU: 25.3, chr-F: 0.560
testset: URL, BLEU: 0.4, chr-F: 0.139
testset: URL, BLEU: 0.6, chr-F: 0.108
testset: URL, BLEU: 18.1, chr-F: 0.388
testset: URL, BLEU: 17.2, chr-F: 0.464
testset: URL, BLEU: 18.0, chr-F: 0.451
testset: URL, BLEU: 81.0, chr-F: 0.899
testset: URL, BLEU: 37.6, chr-F: 0.587
testset: URL, BLEU: 27.7, chr-F: 0.519
testset: URL, BLEU: 32.6, chr-F: 0.539
testset: URL, BLEU: 3.8, chr-F: 0.134
testset: URL, BLEU: 14.3, chr-F: 0.401
testset: URL, BLEU: 0.5, chr-F: 0.002
testset: URL, BLEU: 44.0, chr-F: 0.642
testset: URL, BLEU: 0.7, chr-F: 0.118
testset: URL, BLEU: 42.7, chr-F: 0.623
testset: URL, BLEU: 7.2, chr-F: 0.295
testset: URL, BLEU: 2.7, chr-F: 0.257
testset: URL, BLEU: 0.2, chr-F: 0.008
testset: URL, BLEU: 2.9, chr-F: 0.264
testset: URL, BLEU: 7.4, chr-F: 0.337
testset: URL, BLEU: 48.5, chr-F: 0.656
testset: URL, BLEU: 1.8, chr-F: 0.145
testset: URL, BLEU: 0.7, chr-F: 0.136
testset: URL, BLEU: 31.1, chr-F: 0.563
testset: URL, BLEU: 37.0, chr-F: 0.605
testset: URL, BLEU: 0.2, chr-F: 0.100
testset: URL, BLEU: 1.0, chr-F: 0.134
testset: URL, BLEU: 2.3, chr-F: 0.236
testset: URL, BLEU: 7.8, chr-F: 0.340
testset: URL, BLEU: 34.3, chr-F: 0.585
testset: URL, BLEU: 0.2, chr-F: 0.010
testset: URL, BLEU: 29.6, chr-F: 0.526
testset: URL, BLEU: 2.4, chr-F: 0.125
testset: URL, BLEU: 1.6, chr-F: 0.079
testset: URL, BLEU: 33.6, chr-F: 0.562
testset: URL, BLEU: 3.4, chr-F: 0.114
testset: URL, BLEU: 9.2, chr-F: 0.349
testset: URL, BLEU: 15.6, chr-F: 0.334
testset: URL, BLEU: 9.1, chr-F: 0.324
testset: URL, BLEU: 43.4, chr-F: 0.645
testset: URL, BLEU: 39.0, chr-F: 0.621
testset: URL, BLEU: 10.8, chr-F: 0.373
testset: URL, BLEU: 49.9, chr-F: 0.663
testset: URL, BLEU: 0.7, chr-F: 0.137
testset: URL, BLEU: 6.4, chr-F: 0.346
testset: URL, BLEU: 0.5, chr-F: 0.055
testset: URL, BLEU: 31.4, chr-F: 0.536
testset: URL, BLEU: 11.1, chr-F: 0.389
testset: URL, BLEU: 1.3, chr-F: 0.110
testset: URL, BLEU: 6.8, chr-F: 0.233
testset: URL, BLEU: 5.8, chr-F: 0.295
testset: URL, BLEU: 0.8, chr-F: 0.086
### System Info:
* hf\_name: eng-ine
* source\_languages: eng
* target\_languages: ine
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']
* src\_constituents: {'eng'}
* tgt\_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos\_Latn', 'lad\_Latn', 'lat\_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm\_Latn', 'srd', 'gcf\_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur\_Latn', 'arg', 'pes\_Thaa', 'sqi', 'csb\_Latn', 'fra', 'hat', 'non\_Latn', 'sco', 'pnb', 'roh', 'bul\_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw\_Latn', 'hsb', 'tly\_Latn', 'bul', 'bel', 'got\_Goth', 'lat\_Grek', 'ext', 'gla', 'mai', 'sin', 'hif\_Latn', 'eng', 'bre', 'nob\_Hebr', 'prg\_Latn', 'ang\_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr\_Arab', 'san\_Deva', 'gos', 'rus', 'fao', 'orv\_Cyrl', 'bel\_Latn', 'cos', 'zza', 'grc\_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk\_Cyrl', 'hye\_Latn', 'pdc', 'srp\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp\_Latn', 'zlm\_Latn', 'ind', 'rom', 'hye', 'scn', 'enm\_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus\_Latn', 'jdt\_Cyrl', 'gsw', 'glv', 'nld', 'snd\_Arab', 'kur\_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm\_Latn', 'ksh', 'pan\_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld\_Latn', 'ces', 'egl', 'vec', 'max\_Latn', 'pes\_Latn', 'ltg', 'nds'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: ine
* short\_pair: en-ine
* chrF2\_score: 0.539
* bleu: 32.6
* brevity\_penalty: 0.973
* ref\_len: 68664.0
* src\_name: English
* tgt\_name: Indo-European languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: ine
* prefer\_old: False
* long\_pair: eng-ine
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-ine\n\n\n* source group: English\n* target group: Indo-European languages\n* OPUS readme: eng-ine\n* model: transformer\n* source language(s): eng\n* target language(s): afr aln ang\\_Latn arg asm ast awa bel bel\\_Latn ben bho bos\\_Latn bre bul bul\\_Latn cat ces cor cos csb\\_Latn cym dan deu dsb egl ell enm\\_Latn ext fao fra frm\\_Latn frr fry gcf\\_Latn gla gle glg glv gom gos got\\_Goth grc\\_Grek gsw guj hat hif\\_Latn hin hrv hsb hye ind isl ita jdt\\_Cyrl ksh kur\\_Arab kur\\_Latn lad lad\\_Latn lat\\_Latn lav lij lit lld\\_Latn lmo ltg ltz mai mar max\\_Latn mfe min mkd mwl nds nld nno nob nob\\_Hebr non\\_Latn npi oci ori orv\\_Cyrl oss pan\\_Guru pap pdc pes pes\\_Latn pes\\_Thaa pms pnb pol por prg\\_Latn pus roh rom ron rue rus san\\_Deva scn sco sgs sin slv snd\\_Arab spa sqi srp\\_Cyrl srp\\_Latn stq swe swg tgk\\_Cyrl tly\\_Latn tmw\\_Latn ukr urd vec wln yid zlm\\_Latn zsm\\_Latn zza\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 6.2, chr-F: 0.317\ntestset: URL, BLEU: 22.1, chr-F: 0.525\ntestset: URL, BLEU: 17.4, chr-F: 0.486\ntestset: URL, BLEU: 6.5, chr-F: 0.303\ntestset: URL, BLEU: 14.9, chr-F: 0.476\ntestset: URL, BLEU: 26.4, chr-F: 0.547\ntestset: URL, BLEU: 30.0, chr-F: 0.575\ntestset: URL, BLEU: 14.7, chr-F: 0.442\ntestset: URL, BLEU: 16.7, chr-F: 0.487\ntestset: URL, BLEU: 24.8, chr-F: 0.547\ntestset: URL, BLEU: 25.2, chr-F: 0.562\ntestset: URL, BLEU: 27.0, chr-F: 0.554\ntestset: URL, BLEU: 13.0, chr-F: 0.417\ntestset: URL, BLEU: 17.4, chr-F: 0.480\ntestset: URL, BLEU: 22.3, chr-F: 0.519\ntestset: URL, BLEU: 24.9, chr-F: 0.532\ntestset: URL, BLEU: 13.6, chr-F: 0.432\ntestset: URL, BLEU: 16.6, chr-F: 0.482\ntestset: URL, BLEU: 23.5, chr-F: 
0.535\ntestset: URL, BLEU: 25.5, chr-F: 0.561\ntestset: URL, BLEU: 26.3, chr-F: 0.551\ntestset: URL, BLEU: 14.2, chr-F: 0.436\ntestset: URL, BLEU: 18.3, chr-F: 0.492\ntestset: URL, BLEU: 25.7, chr-F: 0.550\ntestset: URL, BLEU: 30.5, chr-F: 0.578\ntestset: URL, BLEU: 15.1, chr-F: 0.439\ntestset: URL, BLEU: 17.1, chr-F: 0.478\ntestset: URL, BLEU: 28.0, chr-F: 0.569\ntestset: URL, BLEU: 31.9, chr-F: 0.580\ntestset: URL, BLEU: 13.6, chr-F: 0.418\ntestset: URL, BLEU: 17.0, chr-F: 0.475\ntestset: URL, BLEU: 26.1, chr-F: 0.553\ntestset: URL, BLEU: 21.4, chr-F: 0.506\ntestset: URL, BLEU: 31.4, chr-F: 0.577\ntestset: URL, BLEU: 15.3, chr-F: 0.438\ntestset: URL, BLEU: 20.3, chr-F: 0.501\ntestset: URL, BLEU: 26.0, chr-F: 0.540\ntestset: URL, BLEU: 16.1, chr-F: 0.449\ntestset: URL, BLEU: 28.6, chr-F: 0.555\ntestset: URL, BLEU: 9.5, chr-F: 0.344\ntestset: URL, BLEU: 14.8, chr-F: 0.440\ntestset: URL, BLEU: 22.6, chr-F: 0.523\ntestset: URL, BLEU: 18.8, chr-F: 0.483\ntestset: URL, BLEU: 16.8, chr-F: 0.457\ntestset: URL, BLEU: 26.2, chr-F: 0.555\ntestset: URL, BLEU: 21.2, chr-F: 0.510\ntestset: URL, BLEU: 17.6, chr-F: 0.471\ntestset: URL, BLEU: 13.6, chr-F: 0.421\ntestset: URL, BLEU: 21.5, chr-F: 0.516\ntestset: URL, BLEU: 13.0, chr-F: 0.452\ntestset: URL, BLEU: 18.7, chr-F: 0.486\ntestset: URL, BLEU: 13.5, chr-F: 0.425\ntestset: URL, BLEU: 29.8, chr-F: 0.581\ntestset: URL, BLEU: 16.1, chr-F: 0.472\ntestset: URL, BLEU: 14.8, chr-F: 0.435\ntestset: URL, BLEU: 26.6, chr-F: 0.554\ntestset: URL, BLEU: 6.9, chr-F: 0.313\ntestset: URL, BLEU: 10.6, chr-F: 0.429\ntestset: URL, BLEU: 17.5, chr-F: 0.452\ntestset: URL, BLEU: 52.1, chr-F: 0.708\ntestset: URL, BLEU: 5.1, chr-F: 0.131\ntestset: URL, BLEU: 1.2, chr-F: 0.099\ntestset: URL, BLEU: 2.9, chr-F: 0.259\ntestset: URL, BLEU: 14.1, chr-F: 0.408\ntestset: URL, BLEU: 0.3, chr-F: 0.002\ntestset: URL, BLEU: 18.1, chr-F: 0.450\ntestset: URL, BLEU: 13.5, chr-F: 0.432\ntestset: URL, BLEU: 0.3, chr-F: 0.003\ntestset: URL, BLEU: 10.4, chr-F: 
0.318\ntestset: URL, BLEU: 38.7, chr-F: 0.592\ntestset: URL, BLEU: 42.0, chr-F: 0.633\ntestset: URL, BLEU: 32.3, chr-F: 0.546\ntestset: URL, BLEU: 0.5, chr-F: 0.079\ntestset: URL, BLEU: 3.1, chr-F: 0.148\ntestset: URL, BLEU: 1.4, chr-F: 0.216\ntestset: URL, BLEU: 22.4, chr-F: 0.470\ntestset: URL, BLEU: 49.7, chr-F: 0.671\ntestset: URL, BLEU: 31.7, chr-F: 0.554\ntestset: URL, BLEU: 1.1, chr-F: 0.139\ntestset: URL, BLEU: 0.9, chr-F: 0.089\ntestset: URL, BLEU: 42.7, chr-F: 0.640\ntestset: URL, BLEU: 3.5, chr-F: 0.259\ntestset: URL, BLEU: 6.4, chr-F: 0.235\ntestset: URL, BLEU: 6.6, chr-F: 0.285\ntestset: URL, BLEU: 5.7, chr-F: 0.257\ntestset: URL, BLEU: 38.4, chr-F: 0.595\ntestset: URL, BLEU: 0.9, chr-F: 0.149\ntestset: URL, BLEU: 8.4, chr-F: 0.145\ntestset: URL, BLEU: 16.5, chr-F: 0.411\ntestset: URL, BLEU: 0.6, chr-F: 0.098\ntestset: URL, BLEU: 11.6, chr-F: 0.361\ntestset: URL, BLEU: 32.5, chr-F: 0.546\ntestset: URL, BLEU: 38.4, chr-F: 0.602\ntestset: URL, BLEU: 23.1, chr-F: 0.418\ntestset: URL, BLEU: 0.7, chr-F: 0.137\ntestset: URL, BLEU: 0.2, chr-F: 0.010\ntestset: URL, BLEU: 0.0, chr-F: 0.005\ntestset: URL, BLEU: 0.9, chr-F: 0.108\ntestset: URL, BLEU: 20.8, chr-F: 0.391\ntestset: URL, BLEU: 34.0, chr-F: 0.537\ntestset: URL, BLEU: 33.7, chr-F: 0.567\ntestset: URL, BLEU: 2.8, chr-F: 0.269\ntestset: URL, BLEU: 15.6, chr-F: 0.437\ntestset: URL, BLEU: 5.4, chr-F: 0.320\ntestset: URL, BLEU: 17.4, chr-F: 0.426\ntestset: URL, BLEU: 17.4, chr-F: 0.436\ntestset: URL, BLEU: 40.4, chr-F: 0.636\ntestset: URL, BLEU: 6.4, chr-F: 0.008\ntestset: URL, BLEU: 6.6, chr-F: 0.005\ntestset: URL, BLEU: 0.8, chr-F: 0.123\ntestset: URL, BLEU: 10.2, chr-F: 0.209\ntestset: URL, BLEU: 0.8, chr-F: 0.163\ntestset: URL, BLEU: 0.2, chr-F: 0.001\ntestset: URL, BLEU: 9.4, chr-F: 0.372\ntestset: URL, BLEU: 30.3, chr-F: 0.559\ntestset: URL, BLEU: 1.0, chr-F: 0.130\ntestset: URL, BLEU: 25.3, chr-F: 0.560\ntestset: URL, BLEU: 0.4, chr-F: 0.139\ntestset: URL, BLEU: 0.6, chr-F: 0.108\ntestset: URL, BLEU: 
18.1, chr-F: 0.388\ntestset: URL, BLEU: 17.2, chr-F: 0.464\ntestset: URL, BLEU: 18.0, chr-F: 0.451\ntestset: URL, BLEU: 81.0, chr-F: 0.899\ntestset: URL, BLEU: 37.6, chr-F: 0.587\ntestset: URL, BLEU: 27.7, chr-F: 0.519\ntestset: URL, BLEU: 32.6, chr-F: 0.539\ntestset: URL, BLEU: 3.8, chr-F: 0.134\ntestset: URL, BLEU: 14.3, chr-F: 0.401\ntestset: URL, BLEU: 0.5, chr-F: 0.002\ntestset: URL, BLEU: 44.0, chr-F: 0.642\ntestset: URL, BLEU: 0.7, chr-F: 0.118\ntestset: URL, BLEU: 42.7, chr-F: 0.623\ntestset: URL, BLEU: 7.2, chr-F: 0.295\ntestset: URL, BLEU: 2.7, chr-F: 0.257\ntestset: URL, BLEU: 0.2, chr-F: 0.008\ntestset: URL, BLEU: 2.9, chr-F: 0.264\ntestset: URL, BLEU: 7.4, chr-F: 0.337\ntestset: URL, BLEU: 48.5, chr-F: 0.656\ntestset: URL, BLEU: 1.8, chr-F: 0.145\ntestset: URL, BLEU: 0.7, chr-F: 0.136\ntestset: URL, BLEU: 31.1, chr-F: 0.563\ntestset: URL, BLEU: 37.0, chr-F: 0.605\ntestset: URL, BLEU: 0.2, chr-F: 0.100\ntestset: URL, BLEU: 1.0, chr-F: 0.134\ntestset: URL, BLEU: 2.3, chr-F: 0.236\ntestset: URL, BLEU: 7.8, chr-F: 0.340\ntestset: URL, BLEU: 34.3, chr-F: 0.585\ntestset: URL, BLEU: 0.2, chr-F: 0.010\ntestset: URL, BLEU: 29.6, chr-F: 0.526\ntestset: URL, BLEU: 2.4, chr-F: 0.125\ntestset: URL, BLEU: 1.6, chr-F: 0.079\ntestset: URL, BLEU: 33.6, chr-F: 0.562\ntestset: URL, BLEU: 3.4, chr-F: 0.114\ntestset: URL, BLEU: 9.2, chr-F: 0.349\ntestset: URL, BLEU: 15.6, chr-F: 0.334\ntestset: URL, BLEU: 9.1, chr-F: 0.324\ntestset: URL, BLEU: 43.4, chr-F: 0.645\ntestset: URL, BLEU: 39.0, chr-F: 0.621\ntestset: URL, BLEU: 10.8, chr-F: 0.373\ntestset: URL, BLEU: 49.9, chr-F: 0.663\ntestset: URL, BLEU: 0.7, chr-F: 0.137\ntestset: URL, BLEU: 6.4, chr-F: 0.346\ntestset: URL, BLEU: 0.5, chr-F: 0.055\ntestset: URL, BLEU: 31.4, chr-F: 0.536\ntestset: URL, BLEU: 11.1, chr-F: 0.389\ntestset: URL, BLEU: 1.3, chr-F: 0.110\ntestset: URL, BLEU: 6.8, chr-F: 0.233\ntestset: URL, BLEU: 5.8, chr-F: 0.295\ntestset: URL, BLEU: 0.8, chr-F: 0.086",
"### System Info:\n\n\n* hf\\_name: eng-ine\n* source\\_languages: eng\n* target\\_languages: ine\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos\\_Latn', 'lad\\_Latn', 'lat\\_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur\\_Latn', 'arg', 'pes\\_Thaa', 'sqi', 'csb\\_Latn', 'fra', 'hat', 'non\\_Latn', 'sco', 'pnb', 'roh', 'bul\\_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw\\_Latn', 'hsb', 'tly\\_Latn', 'bul', 'bel', 'got\\_Goth', 'lat\\_Grek', 'ext', 'gla', 'mai', 'sin', 'hif\\_Latn', 'eng', 'bre', 'nob\\_Hebr', 'prg\\_Latn', 'ang\\_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr\\_Arab', 'san\\_Deva', 'gos', 'rus', 'fao', 'orv\\_Cyrl', 'bel\\_Latn', 'cos', 'zza', 'grc\\_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk\\_Cyrl', 'hye\\_Latn', 'pdc', 'srp\\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp\\_Latn', 'zlm\\_Latn', 'ind', 'rom', 'hye', 'scn', 'enm\\_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus\\_Latn', 'jdt\\_Cyrl', 'gsw', 'glv', 'nld', 'snd\\_Arab', 'kur\\_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm\\_Latn', 'ksh', 'pan\\_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld\\_Latn', 'ces', 'egl', 'vec', 'max\\_Latn', 'pes\\_Latn', 'ltg', 'nds'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* 
src\\_alpha3: eng\n* tgt\\_alpha3: ine\n* short\\_pair: en-ine\n* chrF2\\_score: 0.539\n* bleu: 32.6\n* brevity\\_penalty: 0.973\n* ref\\_len: 68664.0\n* src\\_name: English\n* tgt\\_name: Indo-European languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: ine\n* prefer\\_old: False\n* long\\_pair: eng-ine\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ca #es #os #ro #fy #cy #sc #is #yi #lb #an #sq #fr #ht #rm #ps #af #uk #sl #lt #bg #be #gd #si #br #mk #or #mr #ru #fo #co #oc #pl #gl #nb #bn #id #hy #da #gv #nl #pt #hi #as #kw #ga #sv #gu #wa #lv #el #it #hr #ur #nn #de #cs #ine #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-ine\n\n\n* source group: English\n* target group: Indo-European languages\n* OPUS readme: eng-ine\n* model: transformer\n* source language(s): eng\n* target language(s): afr aln ang\\_Latn arg asm ast awa bel bel\\_Latn ben bho bos\\_Latn bre bul bul\\_Latn cat ces cor cos csb\\_Latn cym dan deu dsb egl ell enm\\_Latn ext fao fra frm\\_Latn frr fry gcf\\_Latn gla gle glg glv gom gos got\\_Goth grc\\_Grek gsw guj hat hif\\_Latn hin hrv hsb hye ind isl ita jdt\\_Cyrl ksh kur\\_Arab kur\\_Latn lad lad\\_Latn lat\\_Latn lav lij lit lld\\_Latn lmo ltg ltz mai mar max\\_Latn mfe min mkd mwl nds nld nno nob nob\\_Hebr non\\_Latn npi oci ori orv\\_Cyrl oss pan\\_Guru pap pdc pes pes\\_Latn pes\\_Thaa pms pnb pol por prg\\_Latn pus roh rom ron rue rus san\\_Deva scn sco sgs sin slv snd\\_Arab spa sqi srp\\_Cyrl srp\\_Latn stq swe swg tgk\\_Cyrl tly\\_Latn tmw\\_Latn ukr urd vec wln yid zlm\\_Latn zsm\\_Latn zza\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 6.2, chr-F: 0.317\ntestset: URL, BLEU: 22.1, chr-F: 0.525\ntestset: URL, BLEU: 17.4, chr-F: 0.486\ntestset: URL, BLEU: 6.5, chr-F: 0.303\ntestset: URL, BLEU: 14.9, chr-F: 0.476\ntestset: URL, BLEU: 26.4, chr-F: 0.547\ntestset: URL, BLEU: 30.0, chr-F: 0.575\ntestset: URL, BLEU: 14.7, chr-F: 0.442\ntestset: URL, BLEU: 16.7, chr-F: 0.487\ntestset: URL, BLEU: 24.8, chr-F: 0.547\ntestset: URL, BLEU: 25.2, chr-F: 0.562\ntestset: URL, BLEU: 27.0, chr-F: 0.554\ntestset: URL, BLEU: 13.0, chr-F: 0.417\ntestset: URL, BLEU: 17.4, chr-F: 0.480\ntestset: URL, BLEU: 22.3, chr-F: 0.519\ntestset: URL, BLEU: 24.9, chr-F: 0.532\ntestset: URL, BLEU: 13.6, chr-F: 0.432\ntestset: URL, BLEU: 16.6, chr-F: 0.482\ntestset: URL, BLEU: 23.5, chr-F: 
0.535\ntestset: URL, BLEU: 25.5, chr-F: 0.561\ntestset: URL, BLEU: 26.3, chr-F: 0.551\ntestset: URL, BLEU: 14.2, chr-F: 0.436\ntestset: URL, BLEU: 18.3, chr-F: 0.492\ntestset: URL, BLEU: 25.7, chr-F: 0.550\ntestset: URL, BLEU: 30.5, chr-F: 0.578\ntestset: URL, BLEU: 15.1, chr-F: 0.439\ntestset: URL, BLEU: 17.1, chr-F: 0.478\ntestset: URL, BLEU: 28.0, chr-F: 0.569\ntestset: URL, BLEU: 31.9, chr-F: 0.580\ntestset: URL, BLEU: 13.6, chr-F: 0.418\ntestset: URL, BLEU: 17.0, chr-F: 0.475\ntestset: URL, BLEU: 26.1, chr-F: 0.553\ntestset: URL, BLEU: 21.4, chr-F: 0.506\ntestset: URL, BLEU: 31.4, chr-F: 0.577\ntestset: URL, BLEU: 15.3, chr-F: 0.438\ntestset: URL, BLEU: 20.3, chr-F: 0.501\ntestset: URL, BLEU: 26.0, chr-F: 0.540\ntestset: URL, BLEU: 16.1, chr-F: 0.449\ntestset: URL, BLEU: 28.6, chr-F: 0.555\ntestset: URL, BLEU: 9.5, chr-F: 0.344\ntestset: URL, BLEU: 14.8, chr-F: 0.440\ntestset: URL, BLEU: 22.6, chr-F: 0.523\ntestset: URL, BLEU: 18.8, chr-F: 0.483\ntestset: URL, BLEU: 16.8, chr-F: 0.457\ntestset: URL, BLEU: 26.2, chr-F: 0.555\ntestset: URL, BLEU: 21.2, chr-F: 0.510\ntestset: URL, BLEU: 17.6, chr-F: 0.471\ntestset: URL, BLEU: 13.6, chr-F: 0.421\ntestset: URL, BLEU: 21.5, chr-F: 0.516\ntestset: URL, BLEU: 13.0, chr-F: 0.452\ntestset: URL, BLEU: 18.7, chr-F: 0.486\ntestset: URL, BLEU: 13.5, chr-F: 0.425\ntestset: URL, BLEU: 29.8, chr-F: 0.581\ntestset: URL, BLEU: 16.1, chr-F: 0.472\ntestset: URL, BLEU: 14.8, chr-F: 0.435\ntestset: URL, BLEU: 26.6, chr-F: 0.554\ntestset: URL, BLEU: 6.9, chr-F: 0.313\ntestset: URL, BLEU: 10.6, chr-F: 0.429\ntestset: URL, BLEU: 17.5, chr-F: 0.452\ntestset: URL, BLEU: 52.1, chr-F: 0.708\ntestset: URL, BLEU: 5.1, chr-F: 0.131\ntestset: URL, BLEU: 1.2, chr-F: 0.099\ntestset: URL, BLEU: 2.9, chr-F: 0.259\ntestset: URL, BLEU: 14.1, chr-F: 0.408\ntestset: URL, BLEU: 0.3, chr-F: 0.002\ntestset: URL, BLEU: 18.1, chr-F: 0.450\ntestset: URL, BLEU: 13.5, chr-F: 0.432\ntestset: URL, BLEU: 0.3, chr-F: 0.003\ntestset: URL, BLEU: 10.4, chr-F: 
0.318\ntestset: URL, BLEU: 38.7, chr-F: 0.592\ntestset: URL, BLEU: 42.0, chr-F: 0.633\ntestset: URL, BLEU: 32.3, chr-F: 0.546\ntestset: URL, BLEU: 0.5, chr-F: 0.079\ntestset: URL, BLEU: 3.1, chr-F: 0.148\ntestset: URL, BLEU: 1.4, chr-F: 0.216\ntestset: URL, BLEU: 22.4, chr-F: 0.470\ntestset: URL, BLEU: 49.7, chr-F: 0.671\ntestset: URL, BLEU: 31.7, chr-F: 0.554\ntestset: URL, BLEU: 1.1, chr-F: 0.139\ntestset: URL, BLEU: 0.9, chr-F: 0.089\ntestset: URL, BLEU: 42.7, chr-F: 0.640\ntestset: URL, BLEU: 3.5, chr-F: 0.259\ntestset: URL, BLEU: 6.4, chr-F: 0.235\ntestset: URL, BLEU: 6.6, chr-F: 0.285\ntestset: URL, BLEU: 5.7, chr-F: 0.257\ntestset: URL, BLEU: 38.4, chr-F: 0.595\ntestset: URL, BLEU: 0.9, chr-F: 0.149\ntestset: URL, BLEU: 8.4, chr-F: 0.145\ntestset: URL, BLEU: 16.5, chr-F: 0.411\ntestset: URL, BLEU: 0.6, chr-F: 0.098\ntestset: URL, BLEU: 11.6, chr-F: 0.361\ntestset: URL, BLEU: 32.5, chr-F: 0.546\ntestset: URL, BLEU: 38.4, chr-F: 0.602\ntestset: URL, BLEU: 23.1, chr-F: 0.418\ntestset: URL, BLEU: 0.7, chr-F: 0.137\ntestset: URL, BLEU: 0.2, chr-F: 0.010\ntestset: URL, BLEU: 0.0, chr-F: 0.005\ntestset: URL, BLEU: 0.9, chr-F: 0.108\ntestset: URL, BLEU: 20.8, chr-F: 0.391\ntestset: URL, BLEU: 34.0, chr-F: 0.537\ntestset: URL, BLEU: 33.7, chr-F: 0.567\ntestset: URL, BLEU: 2.8, chr-F: 0.269\ntestset: URL, BLEU: 15.6, chr-F: 0.437\ntestset: URL, BLEU: 5.4, chr-F: 0.320\ntestset: URL, BLEU: 17.4, chr-F: 0.426\ntestset: URL, BLEU: 17.4, chr-F: 0.436\ntestset: URL, BLEU: 40.4, chr-F: 0.636\ntestset: URL, BLEU: 6.4, chr-F: 0.008\ntestset: URL, BLEU: 6.6, chr-F: 0.005\ntestset: URL, BLEU: 0.8, chr-F: 0.123\ntestset: URL, BLEU: 10.2, chr-F: 0.209\ntestset: URL, BLEU: 0.8, chr-F: 0.163\ntestset: URL, BLEU: 0.2, chr-F: 0.001\ntestset: URL, BLEU: 9.4, chr-F: 0.372\ntestset: URL, BLEU: 30.3, chr-F: 0.559\ntestset: URL, BLEU: 1.0, chr-F: 0.130\ntestset: URL, BLEU: 25.3, chr-F: 0.560\ntestset: URL, BLEU: 0.4, chr-F: 0.139\ntestset: URL, BLEU: 0.6, chr-F: 0.108\ntestset: URL, BLEU: 
18.1, chr-F: 0.388\ntestset: URL, BLEU: 17.2, chr-F: 0.464\ntestset: URL, BLEU: 18.0, chr-F: 0.451\ntestset: URL, BLEU: 81.0, chr-F: 0.899\ntestset: URL, BLEU: 37.6, chr-F: 0.587\ntestset: URL, BLEU: 27.7, chr-F: 0.519\ntestset: URL, BLEU: 32.6, chr-F: 0.539\ntestset: URL, BLEU: 3.8, chr-F: 0.134\ntestset: URL, BLEU: 14.3, chr-F: 0.401\ntestset: URL, BLEU: 0.5, chr-F: 0.002\ntestset: URL, BLEU: 44.0, chr-F: 0.642\ntestset: URL, BLEU: 0.7, chr-F: 0.118\ntestset: URL, BLEU: 42.7, chr-F: 0.623\ntestset: URL, BLEU: 7.2, chr-F: 0.295\ntestset: URL, BLEU: 2.7, chr-F: 0.257\ntestset: URL, BLEU: 0.2, chr-F: 0.008\ntestset: URL, BLEU: 2.9, chr-F: 0.264\ntestset: URL, BLEU: 7.4, chr-F: 0.337\ntestset: URL, BLEU: 48.5, chr-F: 0.656\ntestset: URL, BLEU: 1.8, chr-F: 0.145\ntestset: URL, BLEU: 0.7, chr-F: 0.136\ntestset: URL, BLEU: 31.1, chr-F: 0.563\ntestset: URL, BLEU: 37.0, chr-F: 0.605\ntestset: URL, BLEU: 0.2, chr-F: 0.100\ntestset: URL, BLEU: 1.0, chr-F: 0.134\ntestset: URL, BLEU: 2.3, chr-F: 0.236\ntestset: URL, BLEU: 7.8, chr-F: 0.340\ntestset: URL, BLEU: 34.3, chr-F: 0.585\ntestset: URL, BLEU: 0.2, chr-F: 0.010\ntestset: URL, BLEU: 29.6, chr-F: 0.526\ntestset: URL, BLEU: 2.4, chr-F: 0.125\ntestset: URL, BLEU: 1.6, chr-F: 0.079\ntestset: URL, BLEU: 33.6, chr-F: 0.562\ntestset: URL, BLEU: 3.4, chr-F: 0.114\ntestset: URL, BLEU: 9.2, chr-F: 0.349\ntestset: URL, BLEU: 15.6, chr-F: 0.334\ntestset: URL, BLEU: 9.1, chr-F: 0.324\ntestset: URL, BLEU: 43.4, chr-F: 0.645\ntestset: URL, BLEU: 39.0, chr-F: 0.621\ntestset: URL, BLEU: 10.8, chr-F: 0.373\ntestset: URL, BLEU: 49.9, chr-F: 0.663\ntestset: URL, BLEU: 0.7, chr-F: 0.137\ntestset: URL, BLEU: 6.4, chr-F: 0.346\ntestset: URL, BLEU: 0.5, chr-F: 0.055\ntestset: URL, BLEU: 31.4, chr-F: 0.536\ntestset: URL, BLEU: 11.1, chr-F: 0.389\ntestset: URL, BLEU: 1.3, chr-F: 0.110\ntestset: URL, BLEU: 6.8, chr-F: 0.233\ntestset: URL, BLEU: 5.8, chr-F: 0.295\ntestset: URL, BLEU: 0.8, chr-F: 0.086",
"### System Info:\n\n\n* hf\\_name: eng-ine\n* source\\_languages: eng\n* target\\_languages: ine\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos\\_Latn', 'lad\\_Latn', 'lat\\_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur\\_Latn', 'arg', 'pes\\_Thaa', 'sqi', 'csb\\_Latn', 'fra', 'hat', 'non\\_Latn', 'sco', 'pnb', 'roh', 'bul\\_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw\\_Latn', 'hsb', 'tly\\_Latn', 'bul', 'bel', 'got\\_Goth', 'lat\\_Grek', 'ext', 'gla', 'mai', 'sin', 'hif\\_Latn', 'eng', 'bre', 'nob\\_Hebr', 'prg\\_Latn', 'ang\\_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr\\_Arab', 'san\\_Deva', 'gos', 'rus', 'fao', 'orv\\_Cyrl', 'bel\\_Latn', 'cos', 'zza', 'grc\\_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk\\_Cyrl', 'hye\\_Latn', 'pdc', 'srp\\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp\\_Latn', 'zlm\\_Latn', 'ind', 'rom', 'hye', 'scn', 'enm\\_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus\\_Latn', 'jdt\\_Cyrl', 'gsw', 'glv', 'nld', 'snd\\_Arab', 'kur\\_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm\\_Latn', 'ksh', 'pan\\_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld\\_Latn', 'ces', 'egl', 'vec', 'max\\_Latn', 'pes\\_Latn', 'ltg', 'nds'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* 
src\\_alpha3: eng\n* tgt\\_alpha3: ine\n* short\\_pair: en-ine\n* chrF2\\_score: 0.539\n* bleu: 32.6\n* brevity\\_penalty: 0.973\n* ref\\_len: 68664.0\n* src\\_name: English\n* tgt\\_name: Indo-European languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: ine\n* prefer\\_old: False\n* long\\_pair: eng-ine\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
178,
4337,
1458
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ca #es #os #ro #fy #cy #sc #is #yi #lb #an #sq #fr #ht #rm #ps #af #uk #sl #lt #bg #be #gd #si #br #mk #or #mr #ru #fo #co #oc #pl #gl #nb #bn #id #hy #da #gv #nl #pt #hi #as #kw #ga #sv #gu #wa #lv #el #it #hr #ur #nn #de #cs #ine #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-ine\n\n\n* source group: English\n* target group: Indo-European languages\n* OPUS readme: eng-ine\n* model: transformer\n* source language(s): eng\n* target language(s): afr aln ang\\_Latn arg asm ast awa bel bel\\_Latn ben bho bos\\_Latn bre bul bul\\_Latn cat ces cor cos csb\\_Latn cym dan deu dsb egl ell enm\\_Latn ext fao fra frm\\_Latn frr fry gcf\\_Latn gla gle glg glv gom gos got\\_Goth grc\\_Grek gsw guj hat hif\\_Latn hin hrv hsb hye ind isl ita jdt\\_Cyrl ksh kur\\_Arab kur\\_Latn lad lad\\_Latn lat\\_Latn lav lij lit lld\\_Latn lmo ltg ltz mai mar max\\_Latn mfe min mkd mwl nds nld nno nob nob\\_Hebr non\\_Latn npi oci ori orv\\_Cyrl oss pan\\_Guru pap pdc pes pes\\_Latn pes\\_Thaa pms pnb pol por prg\\_Latn pus roh rom ron rue rus san\\_Deva scn sco sgs sin slv snd\\_Arab spa sqi srp\\_Cyrl srp\\_Latn stq swe swg tgk\\_Cyrl tly\\_Latn tmw\\_Latn ukr urd vec wln yid zlm\\_Latn zsm\\_Latn zza\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 6.2, chr-F: 0.317\ntestset: URL, BLEU: 22.1, chr-F: 0.525\ntestset: URL, BLEU: 17.4, chr-F: 0.486\ntestset: URL, BLEU: 6.5, chr-F: 0.303\ntestset: URL, BLEU: 14.9, chr-F: 0.476\ntestset: URL, BLEU: 26.4, chr-F: 0.547\ntestset: URL, BLEU: 30.0, chr-F: 0.575\ntestset: URL, BLEU: 14.7, chr-F: 0.442\ntestset: URL, BLEU: 16.7, chr-F: 
0.487\ntestset: URL, BLEU: 24.8, chr-F: 0.547\ntestset: URL, BLEU: 25.2, chr-F: 0.562\ntestset: URL, BLEU: 27.0, chr-F: 0.554\ntestset: URL, BLEU: 13.0, chr-F: 0.417\ntestset: URL, BLEU: 17.4, chr-F: 0.480\ntestset: URL, BLEU: 22.3, chr-F: 0.519\ntestset: URL, BLEU: 24.9, chr-F: 0.532\ntestset: URL, BLEU: 13.6, chr-F: 0.432\ntestset: URL, BLEU: 16.6, chr-F: 0.482\ntestset: URL, BLEU: 23.5, chr-F: 0.535\ntestset: URL, BLEU: 25.5, chr-F: 0.561\ntestset: URL, BLEU: 26.3, chr-F: 0.551\ntestset: URL, BLEU: 14.2, chr-F: 0.436\ntestset: URL, BLEU: 18.3, chr-F: 0.492\ntestset: URL, BLEU: 25.7, chr-F: 0.550\ntestset: URL, BLEU: 30.5, chr-F: 0.578\ntestset: URL, BLEU: 15.1, chr-F: 0.439\ntestset: URL, BLEU: 17.1, chr-F: 0.478\ntestset: URL, BLEU: 28.0, chr-F: 0.569\ntestset: URL, BLEU: 31.9, chr-F: 0.580\ntestset: URL, BLEU: 13.6, chr-F: 0.418\ntestset: URL, BLEU: 17.0, chr-F: 0.475\ntestset: URL, BLEU: 26.1, chr-F: 0.553\ntestset: URL, BLEU: 21.4, chr-F: 0.506\ntestset: URL, BLEU: 31.4, chr-F: 0.577\ntestset: URL, BLEU: 15.3, chr-F: 0.438\ntestset: URL, BLEU: 20.3, chr-F: 0.501\ntestset: URL, BLEU: 26.0, chr-F: 0.540\ntestset: URL, BLEU: 16.1, chr-F: 0.449\ntestset: URL, BLEU: 28.6, chr-F: 0.555\ntestset: URL, BLEU: 9.5, chr-F: 0.344\ntestset: URL, BLEU: 14.8, chr-F: 0.440\ntestset: URL, BLEU: 22.6, chr-F: 0.523\ntestset: URL, BLEU: 18.8, chr-F: 0.483\ntestset: URL, BLEU: 16.8, chr-F: 0.457\ntestset: URL, BLEU: 26.2, chr-F: 0.555\ntestset: URL, BLEU: 21.2, chr-F: 0.510\ntestset: URL, BLEU: 17.6, chr-F: 0.471\ntestset: URL, BLEU: 13.6, chr-F: 0.421\ntestset: URL, BLEU: 21.5, chr-F: 0.516\ntestset: URL, BLEU: 13.0, chr-F: 0.452\ntestset: URL, BLEU: 18.7, chr-F: 0.486\ntestset: URL, BLEU: 13.5, chr-F: 0.425\ntestset: URL, BLEU: 29.8, chr-F: 0.581\ntestset: URL, BLEU: 16.1, chr-F: 0.472\ntestset: URL, BLEU: 14.8, chr-F: 0.435\ntestset: URL, BLEU: 26.6, chr-F: 0.554\ntestset: URL, BLEU: 6.9, chr-F: 0.313\ntestset: URL, BLEU: 10.6, chr-F: 0.429\ntestset: URL, BLEU: 17.5, chr-F: 
0.452\ntestset: URL, BLEU: 52.1, chr-F: 0.708\ntestset: URL, BLEU: 5.1, chr-F: 0.131\ntestset: URL, BLEU: 1.2, chr-F: 0.099\ntestset: URL, BLEU: 2.9, chr-F: 0.259\ntestset: URL, BLEU: 14.1, chr-F: 0.408\ntestset: URL, BLEU: 0.3, chr-F: 0.002\ntestset: URL, BLEU: 18.1, chr-F: 0.450\ntestset: URL, BLEU: 13.5, chr-F: 0.432\ntestset: URL, BLEU: 0.3, chr-F: 0.003\ntestset: URL, BLEU: 10.4, chr-F: 0.318\ntestset: URL, BLEU: 38.7, chr-F: 0.592\ntestset: URL, BLEU: 42.0, chr-F: 0.633\ntestset: URL, BLEU: 32.3, chr-F: 0.546\ntestset: URL, BLEU: 0.5, chr-F: 0.079\ntestset: URL, BLEU: 3.1, chr-F: 0.148\ntestset: URL, BLEU: 1.4, chr-F: 0.216\ntestset: URL, BLEU: 22.4, chr-F: 0.470\ntestset: URL, BLEU: 49.7, chr-F: 0.671\ntestset: URL, BLEU: 31.7, chr-F: 0.554\ntestset: URL, BLEU: 1.1, chr-F: 0.139\ntestset: URL, BLEU: 0.9, chr-F: 0.089\ntestset: URL, BLEU: 42.7, chr-F: 0.640\ntestset: URL, BLEU: 3.5, chr-F: 0.259\ntestset: URL, BLEU: 6.4, chr-F: 0.235\ntestset: URL, BLEU: 6.6, chr-F: 0.285\ntestset: URL, BLEU: 5.7, chr-F: 0.257\ntestset: URL, BLEU: 38.4, chr-F: 0.595\ntestset: URL, BLEU: 0.9, chr-F: 0.149\ntestset: URL, BLEU: 8.4, chr-F: 0.145\ntestset: URL, BLEU: 16.5, chr-F: 0.411\ntestset: URL, BLEU: 0.6, chr-F: 0.098\ntestset: URL, BLEU: 11.6, chr-F: 0.361\ntestset: URL, BLEU: 32.5, chr-F: 0.546\ntestset: URL, BLEU: 38.4, chr-F: 0.602\ntestset: URL, BLEU: 23.1, chr-F: 0.418\ntestset: URL, BLEU: 0.7, chr-F: 0.137\ntestset: URL, BLEU: 0.2, chr-F: 0.010\ntestset: URL, BLEU: 0.0, chr-F: 0.005\ntestset: URL, BLEU: 0.9, chr-F: 0.108\ntestset: URL, BLEU: 20.8, chr-F: 0.391\ntestset: URL, BLEU: 34.0, chr-F: 0.537\ntestset: URL, BLEU: 33.7, chr-F: 0.567\ntestset: URL, BLEU: 2.8, chr-F: 0.269\ntestset: URL, BLEU: 15.6, chr-F: 0.437\ntestset: URL, BLEU: 5.4, chr-F: 0.320\ntestset: URL, BLEU: 17.4, chr-F: 0.426\ntestset: URL, BLEU: 17.4, chr-F: 0.436\ntestset: URL, BLEU: 40.4, chr-F: 0.636\ntestset: URL, BLEU: 6.4, chr-F: 0.008\ntestset: URL, BLEU: 6.6, chr-F: 0.005\ntestset: URL, 
BLEU: 0.8, chr-F: 0.123\ntestset: URL, BLEU: 10.2, chr-F: 0.209\ntestset: URL, BLEU: 0.8, chr-F: 0.163\ntestset: URL, BLEU: 0.2, chr-F: 0.001\ntestset: URL, BLEU: 9.4, chr-F: 0.372\ntestset: URL, BLEU: 30.3, chr-F: 0.559\ntestset: URL, BLEU: 1.0, chr-F: 0.130\ntestset: URL, BLEU: 25.3, chr-F: 0.560\ntestset: URL, BLEU: 0.4, chr-F: 0.139\ntestset: URL, BLEU: 0.6, chr-F: 0.108\ntestset: URL, BLEU: 18.1, chr-F: 0.388\ntestset: URL, BLEU: 17.2, chr-F: 0.464\ntestset: URL, BLEU: 18.0, chr-F: 0.451\ntestset: URL, BLEU: 81.0, chr-F: 0.899\ntestset: URL, BLEU: 37.6, chr-F: 0.587\ntestset: URL, BLEU: 27.7, chr-F: 0.519\ntestset: URL, BLEU: 32.6, chr-F: 0.539\ntestset: URL, BLEU: 3.8, chr-F: 0.134\ntestset: URL, BLEU: 14.3, chr-F: 0.401\ntestset: URL, BLEU: 0.5, chr-F: 0.002\ntestset: URL, BLEU: 44.0, chr-F: 0.642\ntestset: URL, BLEU: 0.7, chr-F: 0.118\ntestset: URL, BLEU: 42.7, chr-F: 0.623\ntestset: URL, BLEU: 7.2, chr-F: 0.295\ntestset: URL, BLEU: 2.7, chr-F: 0.257\ntestset: URL, BLEU: 0.2, chr-F: 0.008\ntestset: URL, BLEU: 2.9, chr-F: 0.264\ntestset: URL, BLEU: 7.4, chr-F: 0.337\ntestset: URL, BLEU: 48.5, chr-F: 0.656\ntestset: URL, BLEU: 1.8, chr-F: 0.145\ntestset: URL, BLEU: 0.7, chr-F: 0.136\ntestset: URL, BLEU: 31.1, chr-F: 0.563\ntestset: URL, BLEU: 37.0, chr-F: 0.605\ntestset: URL, BLEU: 0.2, chr-F: 0.100\ntestset: URL, BLEU: 1.0, chr-F: 0.134\ntestset: URL, BLEU: 2.3, chr-F: 0.236\ntestset: URL, BLEU: 7.8, chr-F: 0.340\ntestset: URL, BLEU: 34.3, chr-F: 0.585\ntestset: URL, BLEU: 0.2, chr-F: 0.010\ntestset: URL, BLEU: 29.6, chr-F: 0.526\ntestset: URL, BLEU: 2.4, chr-F: 0.125\ntestset: URL, BLEU: 1.6, chr-F: 0.079\ntestset: URL, BLEU: 33.6, chr-F: 0.562\ntestset: URL, BLEU: 3.4, chr-F: 0.114\ntestset: URL, BLEU: 9.2, chr-F: 0.349\ntestset: URL, BLEU: 15.6, chr-F: 0.334\ntestset: URL, BLEU: 9.1, chr-F: 0.324\ntestset: URL, BLEU: 43.4, chr-F: 0.645\ntestset: URL, BLEU: 39.0, chr-F: 0.621\ntestset: URL, BLEU: 10.8, chr-F: 0.373\ntestset: URL, BLEU: 49.9, chr-F: 
0.663\ntestset: URL, BLEU: 0.7, chr-F: 0.137\ntestset: URL, BLEU: 6.4, chr-F: 0.346\ntestset: URL, BLEU: 0.5, chr-F: 0.055\ntestset: URL, BLEU: 31.4, chr-F: 0.536\ntestset: URL, BLEU: 11.1, chr-F: 0.389\ntestset: URL, BLEU: 1.3, chr-F: 0.110\ntestset: URL, BLEU: 6.8, chr-F: 0.233\ntestset: URL, BLEU: 5.8, chr-F: 0.295\ntestset: URL, BLEU: 0.8, chr-F: 0.086### System Info:\n\n\n* hf\\_name: eng-ine\n* source\\_languages: eng\n* target\\_languages: ine\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos\\_Latn', 'lad\\_Latn', 'lat\\_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur\\_Latn', 'arg', 'pes\\_Thaa', 'sqi', 'csb\\_Latn', 'fra', 'hat', 'non\\_Latn', 'sco', 'pnb', 'roh', 'bul\\_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw\\_Latn', 'hsb', 'tly\\_Latn', 'bul', 'bel', 'got\\_Goth', 'lat\\_Grek', 'ext', 'gla', 'mai', 'sin', 'hif\\_Latn', 'eng', 'bre', 'nob\\_Hebr', 'prg\\_Latn', 'ang\\_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr\\_Arab', 'san\\_Deva', 'gos', 'rus', 'fao', 'orv\\_Cyrl', 'bel\\_Latn', 'cos', 'zza', 'grc\\_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk\\_Cyrl', 'hye\\_Latn', 'pdc', 'srp\\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp\\_Latn', 'zlm\\_Latn', 'ind', 'rom', 'hye', 'scn', 'enm\\_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus\\_Latn', 'jdt\\_Cyrl', 'gsw', 'glv', 'nld', 'snd\\_Arab', 'kur\\_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm\\_Latn', 'ksh', 'pan\\_Guru', 
'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld\\_Latn', 'ces', 'egl', 'vec', 'max\\_Latn', 'pes\\_Latn', 'ltg', 'nds'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: ine\n* short\\_pair: en-ine\n* chrF2\\_score: 0.539\n* bleu: 32.6\n* brevity\\_penalty: 0.973\n* ref\\_len: 68664.0\n* src\\_name: English\n* tgt\\_name: Indo-European languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: ine\n* prefer\\_old: False\n* long\\_pair: eng-ine\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-is
* source languages: en
* target languages: is
* OPUS readme: [en-is](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-is/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-is/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-is/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-is/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.is | 25.3 | 0.518 |
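
The chr-F column above is a character n-gram F-score. As a rough illustration of what that metric measures (a simplified sketch using n-grams up to 6 and β=2, not the official sacreBLEU implementation, which also handles word n-grams and whitespace options):

```python
from collections import Counter

def char_ngrams(text, n):
    # Character n-grams with spaces removed, as chrF does by default.
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    # Average n-gram precision and recall over n = 1..max_n, then F_beta.
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / max(sum(hyp.values()), 1))
        recalls.append(overlap / max(sum(ref.values()), 1))
    p, r = sum(precisions) / max_n, sum(recalls) / max_n
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

Identical strings score 1.0 and fully disjoint strings score 0.0; reported scores such as 0.518 fall in between.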
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-is | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"is",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #is #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-is
* source languages: en
* target languages: is
* OPUS readme: en-is
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.3, chr-F: 0.518
| [
"### opus-mt-en-is\n\n\n* source languages: en\n* target languages: is\n* OPUS readme: en-is\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.518"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #is #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-is\n\n\n* source languages: en\n* target languages: is\n* OPUS readme: en-is\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.518"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #is #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-is\n\n\n* source languages: en\n* target languages: is\n* OPUS readme: en-is\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.518"
] |
translation | transformers |
### opus-mt-en-iso
* source languages: en
* target languages: iso
* OPUS readme: [en-iso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-iso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-iso/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-iso/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-iso/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.iso | 35.7 | 0.523 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-iso | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"iso",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #iso #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-iso
* source languages: en
* target languages: iso
* OPUS readme: en-iso
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 35.7, chr-F: 0.523
| [
"### opus-mt-en-iso\n\n\n* source languages: en\n* target languages: iso\n* OPUS readme: en-iso\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.7, chr-F: 0.523"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #iso #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-iso\n\n\n* source languages: en\n* target languages: iso\n* OPUS readme: en-iso\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.7, chr-F: 0.523"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #iso #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-iso\n\n\n* source languages: en\n* target languages: iso\n* OPUS readme: en-iso\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.7, chr-F: 0.523"
] |
translation | transformers |
### opus-mt-en-it
* source languages: en
* target languages: it
* OPUS readme: [en-it](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-it/README.md)
* dataset: opus
* model: transformer
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-04.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.zip)
* test set translations: [opus-2019-12-04.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.test.txt)
* test set scores: [opus-2019-12-04.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-it/opus-2019-12-04.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.en.it | 30.9 | 0.606 |
| newstest2009.en.it | 31.9 | 0.604 |
| Tatoeba.en.it | 48.2 | 0.695 |
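
The BLEU scores above include BLEU's brevity penalty, which discounts hypotheses shorter than the reference. As a reminder of how that term is computed (standard corpus-level definition, sketch only):

```python
import math

def brevity_penalty(hyp_len, ref_len):
    # BLEU's brevity penalty: 1 when the hypothesis is at least as long
    # as the reference, exp(1 - ref/hyp) when it is shorter.
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)
```

For example, a hypothesis 10% shorter than the reference is multiplied by roughly 0.895.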
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-it | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-it
* source languages: en
* target languages: it
* OPUS readme: en-it
* dataset: opus
* model: transformer
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.9, chr-F: 0.606
testset: URL, BLEU: 31.9, chr-F: 0.604
testset: URL, BLEU: 48.2, chr-F: 0.695
| [
"### opus-mt-en-it\n\n\n* source languages: en\n* target languages: it\n* OPUS readme: en-it\n* dataset: opus\n* model: transformer\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.9, chr-F: 0.606\ntestset: URL, BLEU: 31.9, chr-F: 0.604\ntestset: URL, BLEU: 48.2, chr-F: 0.695"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-it\n\n\n* source languages: en\n* target languages: it\n* OPUS readme: en-it\n* dataset: opus\n* model: transformer\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.9, chr-F: 0.606\ntestset: URL, BLEU: 31.9, chr-F: 0.604\ntestset: URL, BLEU: 48.2, chr-F: 0.695"
] | [
51,
150
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-it\n\n\n* source languages: en\n* target languages: it\n* OPUS readme: en-it\n* dataset: opus\n* model: transformer\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.9, chr-F: 0.606\ntestset: URL, BLEU: 31.9, chr-F: 0.604\ntestset: URL, BLEU: 48.2, chr-F: 0.695"
] |
translation | transformers |
### eng-itc
* source group: English
* target group: Italic languages
* OPUS readme: [eng-itc](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-itc/README.md)
* model: transformer
* source language(s): eng
* target language(s): arg ast cat cos egl ext fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Latn lij lld_Latn lmo max_Latn mfe min mwl oci pap pms por roh ron scn spa tmw_Latn vec wln zlm_Latn zsm_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-enro-engron.eng.ron | 27.1 | 0.565 |
| newsdiscussdev2015-enfr-engfra.eng.fra | 29.9 | 0.574 |
| newsdiscusstest2015-enfr-engfra.eng.fra | 35.3 | 0.609 |
| newssyscomb2009-engfra.eng.fra | 27.7 | 0.567 |
| newssyscomb2009-engita.eng.ita | 28.6 | 0.586 |
| newssyscomb2009-engspa.eng.spa | 29.8 | 0.569 |
| news-test2008-engfra.eng.fra | 25.0 | 0.536 |
| news-test2008-engspa.eng.spa | 27.1 | 0.548 |
| newstest2009-engfra.eng.fra | 26.7 | 0.557 |
| newstest2009-engita.eng.ita | 28.9 | 0.583 |
| newstest2009-engspa.eng.spa | 28.9 | 0.567 |
| newstest2010-engfra.eng.fra | 29.6 | 0.574 |
| newstest2010-engspa.eng.spa | 33.8 | 0.598 |
| newstest2011-engfra.eng.fra | 30.9 | 0.590 |
| newstest2011-engspa.eng.spa | 34.8 | 0.598 |
| newstest2012-engfra.eng.fra | 29.1 | 0.574 |
| newstest2012-engspa.eng.spa | 34.9 | 0.600 |
| newstest2013-engfra.eng.fra | 30.1 | 0.567 |
| newstest2013-engspa.eng.spa | 31.8 | 0.576 |
| newstest2016-enro-engron.eng.ron | 25.9 | 0.548 |
| Tatoeba-test.eng-arg.eng.arg | 1.6 | 0.120 |
| Tatoeba-test.eng-ast.eng.ast | 17.2 | 0.389 |
| Tatoeba-test.eng-cat.eng.cat | 47.6 | 0.668 |
| Tatoeba-test.eng-cos.eng.cos | 4.3 | 0.287 |
| Tatoeba-test.eng-egl.eng.egl | 0.9 | 0.101 |
| Tatoeba-test.eng-ext.eng.ext | 8.7 | 0.287 |
| Tatoeba-test.eng-fra.eng.fra | 44.9 | 0.635 |
| Tatoeba-test.eng-frm.eng.frm | 1.0 | 0.225 |
| Tatoeba-test.eng-gcf.eng.gcf | 0.7 | 0.115 |
| Tatoeba-test.eng-glg.eng.glg | 44.9 | 0.648 |
| Tatoeba-test.eng-hat.eng.hat | 30.9 | 0.533 |
| Tatoeba-test.eng-ita.eng.ita | 45.4 | 0.673 |
| Tatoeba-test.eng-lad.eng.lad | 5.6 | 0.279 |
| Tatoeba-test.eng-lat.eng.lat | 12.1 | 0.380 |
| Tatoeba-test.eng-lij.eng.lij | 1.4 | 0.183 |
| Tatoeba-test.eng-lld.eng.lld | 0.5 | 0.199 |
| Tatoeba-test.eng-lmo.eng.lmo | 0.7 | 0.187 |
| Tatoeba-test.eng-mfe.eng.mfe | 83.6 | 0.909 |
| Tatoeba-test.eng-msa.eng.msa | 31.3 | 0.549 |
| Tatoeba-test.eng.multi | 38.0 | 0.588 |
| Tatoeba-test.eng-mwl.eng.mwl | 2.7 | 0.322 |
| Tatoeba-test.eng-oci.eng.oci | 8.2 | 0.293 |
| Tatoeba-test.eng-pap.eng.pap | 46.7 | 0.663 |
| Tatoeba-test.eng-pms.eng.pms | 2.1 | 0.194 |
| Tatoeba-test.eng-por.eng.por | 41.2 | 0.635 |
| Tatoeba-test.eng-roh.eng.roh | 2.6 | 0.237 |
| Tatoeba-test.eng-ron.eng.ron | 40.6 | 0.632 |
| Tatoeba-test.eng-scn.eng.scn | 1.6 | 0.181 |
| Tatoeba-test.eng-spa.eng.spa | 49.5 | 0.685 |
| Tatoeba-test.eng-vec.eng.vec | 1.6 | 0.223 |
| Tatoeba-test.eng-wln.eng.wln | 7.1 | 0.250 |
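Because this model covers many target languages, each source sentence must start with the `>>id<<` token described above (id = valid target language ID). Below is a minimal sketch of preparing such inputs; the commented-out model calls assume the `transformers` MarianMT API and the `Helsinki-NLP/opus-mt-en-itc` repo id, and are not run here:

```python
def with_target_token(sentence: str, target_lang: str) -> str:
    """Prepend the >>id<< target-language token this multilingual model expects."""
    return f">>{target_lang}<< {sentence}"

# Example: request Italian and Portuguese translations of the same sentence.
batch = [with_target_token("How are you today?", lang) for lang in ("ita", "por")]
# batch == ['>>ita<< How are you today?', '>>por<< How are you today?']

# Feeding the prepared batch to the model would look roughly like this
# (assumes the transformers library is installed):
#   from transformers import MarianMTModel, MarianTokenizer
#   tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-itc")
#   model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-itc")
#   out = model.generate(**tok(batch, return_tensors="pt", padding=True))
#   print(tok.batch_decode(out, skip_special_tokens=True))
```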
### System Info:
- hf_name: eng-itc
- source_languages: eng
- target_languages: itc
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-itc/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc']
- src_constituents: {'eng'}
- tgt_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: itc
- short_pair: en-itc
- chrF2_score: 0.588
- bleu: 38.0
- brevity_penalty: 0.9670000000000001
- ref_len: 73951.0
- src_name: English
- tgt_name: Italic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: itc
- prefer_old: False
- long_pair: eng-itc
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "it", "ca", "rm", "es", "ro", "gl", "sc", "co", "wa", "pt", "oc", "an", "id", "fr", "ht", "itc"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-itc | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"it",
"ca",
"rm",
"es",
"ro",
"gl",
"sc",
"co",
"wa",
"pt",
"oc",
"an",
"id",
"fr",
"ht",
"itc",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"it",
"ca",
"rm",
"es",
"ro",
"gl",
"sc",
"co",
"wa",
"pt",
"oc",
"an",
"id",
"fr",
"ht",
"itc"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #it #ca #rm #es #ro #gl #sc #co #wa #pt #oc #an #id #fr #ht #itc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-itc
* source group: English
* target group: Italic languages
* OPUS readme: eng-itc
* model: transformer
* source language(s): eng
* target language(s): arg ast cat cos egl ext fra frm\_Latn gcf\_Latn glg hat ind ita lad lad\_Latn lat\_Latn lij lld\_Latn lmo max\_Latn mfe min mwl oci pap pms por roh ron scn spa tmw\_Latn vec wln zlm\_Latn zsm\_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.1, chr-F: 0.565
testset: URL, BLEU: 29.9, chr-F: 0.574
testset: URL, BLEU: 35.3, chr-F: 0.609
testset: URL, BLEU: 27.7, chr-F: 0.567
testset: URL, BLEU: 28.6, chr-F: 0.586
testset: URL, BLEU: 29.8, chr-F: 0.569
testset: URL, BLEU: 25.0, chr-F: 0.536
testset: URL, BLEU: 27.1, chr-F: 0.548
testset: URL, BLEU: 26.7, chr-F: 0.557
testset: URL, BLEU: 28.9, chr-F: 0.583
testset: URL, BLEU: 28.9, chr-F: 0.567
testset: URL, BLEU: 29.6, chr-F: 0.574
testset: URL, BLEU: 33.8, chr-F: 0.598
testset: URL, BLEU: 30.9, chr-F: 0.590
testset: URL, BLEU: 34.8, chr-F: 0.598
testset: URL, BLEU: 29.1, chr-F: 0.574
testset: URL, BLEU: 34.9, chr-F: 0.600
testset: URL, BLEU: 30.1, chr-F: 0.567
testset: URL, BLEU: 31.8, chr-F: 0.576
testset: URL, BLEU: 25.9, chr-F: 0.548
testset: URL, BLEU: 1.6, chr-F: 0.120
testset: URL, BLEU: 17.2, chr-F: 0.389
testset: URL, BLEU: 47.6, chr-F: 0.668
testset: URL, BLEU: 4.3, chr-F: 0.287
testset: URL, BLEU: 0.9, chr-F: 0.101
testset: URL, BLEU: 8.7, chr-F: 0.287
testset: URL, BLEU: 44.9, chr-F: 0.635
testset: URL, BLEU: 1.0, chr-F: 0.225
testset: URL, BLEU: 0.7, chr-F: 0.115
testset: URL, BLEU: 44.9, chr-F: 0.648
testset: URL, BLEU: 30.9, chr-F: 0.533
testset: URL, BLEU: 45.4, chr-F: 0.673
testset: URL, BLEU: 5.6, chr-F: 0.279
testset: URL, BLEU: 12.1, chr-F: 0.380
testset: URL, BLEU: 1.4, chr-F: 0.183
testset: URL, BLEU: 0.5, chr-F: 0.199
testset: URL, BLEU: 0.7, chr-F: 0.187
testset: URL, BLEU: 83.6, chr-F: 0.909
testset: URL, BLEU: 31.3, chr-F: 0.549
testset: URL, BLEU: 38.0, chr-F: 0.588
testset: URL, BLEU: 2.7, chr-F: 0.322
testset: URL, BLEU: 8.2, chr-F: 0.293
testset: URL, BLEU: 46.7, chr-F: 0.663
testset: URL, BLEU: 2.1, chr-F: 0.194
testset: URL, BLEU: 41.2, chr-F: 0.635
testset: URL, BLEU: 2.6, chr-F: 0.237
testset: URL, BLEU: 40.6, chr-F: 0.632
testset: URL, BLEU: 1.6, chr-F: 0.181
testset: URL, BLEU: 49.5, chr-F: 0.685
testset: URL, BLEU: 1.6, chr-F: 0.223
testset: URL, BLEU: 7.1, chr-F: 0.250
### System Info:
* hf\_name: eng-itc
* source\_languages: eng
* target\_languages: itc
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc']
* src\_constituents: {'eng'}
* tgt\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat\_Latn', 'lad\_Latn', 'pcd', 'lat\_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\_Latn', 'srd', 'gcf\_Latn', 'lld\_Latn', 'min', 'tmw\_Latn', 'cos', 'wln', 'zlm\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\_Latn', 'frm\_Latn', 'scn', 'mfe'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: itc
* short\_pair: en-itc
* chrF2\_score: 0.588
* bleu: 38.0
* brevity\_penalty: 0.9670000000000001
* ref\_len: 73951.0
* src\_name: English
* tgt\_name: Italic languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: itc
* prefer\_old: False
* long\_pair: eng-itc
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-itc\n\n\n* source group: English\n* target group: Italic languages\n* OPUS readme: eng-itc\n* model: transformer\n* source language(s): eng\n* target language(s): arg ast cat cos egl ext fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lat\\_Latn lij lld\\_Latn lmo max\\_Latn mfe min mwl oci pap pms por roh ron scn spa tmw\\_Latn vec wln zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.1, chr-F: 0.565\ntestset: URL, BLEU: 29.9, chr-F: 0.574\ntestset: URL, BLEU: 35.3, chr-F: 0.609\ntestset: URL, BLEU: 27.7, chr-F: 0.567\ntestset: URL, BLEU: 28.6, chr-F: 0.586\ntestset: URL, BLEU: 29.8, chr-F: 0.569\ntestset: URL, BLEU: 25.0, chr-F: 0.536\ntestset: URL, BLEU: 27.1, chr-F: 0.548\ntestset: URL, BLEU: 26.7, chr-F: 0.557\ntestset: URL, BLEU: 28.9, chr-F: 0.583\ntestset: URL, BLEU: 28.9, chr-F: 0.567\ntestset: URL, BLEU: 29.6, chr-F: 0.574\ntestset: URL, BLEU: 33.8, chr-F: 0.598\ntestset: URL, BLEU: 30.9, chr-F: 0.590\ntestset: URL, BLEU: 34.8, chr-F: 0.598\ntestset: URL, BLEU: 29.1, chr-F: 0.574\ntestset: URL, BLEU: 34.9, chr-F: 0.600\ntestset: URL, BLEU: 30.1, chr-F: 0.567\ntestset: URL, BLEU: 31.8, chr-F: 0.576\ntestset: URL, BLEU: 25.9, chr-F: 0.548\ntestset: URL, BLEU: 1.6, chr-F: 0.120\ntestset: URL, BLEU: 17.2, chr-F: 0.389\ntestset: URL, BLEU: 47.6, chr-F: 0.668\ntestset: URL, BLEU: 4.3, chr-F: 0.287\ntestset: URL, BLEU: 0.9, chr-F: 0.101\ntestset: URL, BLEU: 8.7, chr-F: 0.287\ntestset: URL, BLEU: 44.9, chr-F: 0.635\ntestset: URL, BLEU: 1.0, chr-F: 0.225\ntestset: URL, BLEU: 0.7, chr-F: 0.115\ntestset: URL, BLEU: 44.9, chr-F: 0.648\ntestset: URL, BLEU: 30.9, chr-F: 0.533\ntestset: URL, BLEU: 45.4, chr-F: 0.673\ntestset: URL, BLEU: 5.6, chr-F: 
0.279\ntestset: URL, BLEU: 12.1, chr-F: 0.380\ntestset: URL, BLEU: 1.4, chr-F: 0.183\ntestset: URL, BLEU: 0.5, chr-F: 0.199\ntestset: URL, BLEU: 0.7, chr-F: 0.187\ntestset: URL, BLEU: 83.6, chr-F: 0.909\ntestset: URL, BLEU: 31.3, chr-F: 0.549\ntestset: URL, BLEU: 38.0, chr-F: 0.588\ntestset: URL, BLEU: 2.7, chr-F: 0.322\ntestset: URL, BLEU: 8.2, chr-F: 0.293\ntestset: URL, BLEU: 46.7, chr-F: 0.663\ntestset: URL, BLEU: 2.1, chr-F: 0.194\ntestset: URL, BLEU: 41.2, chr-F: 0.635\ntestset: URL, BLEU: 2.6, chr-F: 0.237\ntestset: URL, BLEU: 40.6, chr-F: 0.632\ntestset: URL, BLEU: 1.6, chr-F: 0.181\ntestset: URL, BLEU: 49.5, chr-F: 0.685\ntestset: URL, BLEU: 1.6, chr-F: 0.223\ntestset: URL, BLEU: 7.1, chr-F: 0.250",
"### System Info:\n\n\n* hf\\_name: eng-itc\n* source\\_languages: eng\n* target\\_languages: itc\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat\\_Latn', 'lad\\_Latn', 'pcd', 'lat\\_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\\_Latn', 'frm\\_Latn', 'scn', 'mfe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: itc\n* short\\_pair: en-itc\n* chrF2\\_score: 0.588\n* bleu: 38.0\n* brevity\\_penalty: 0.9670000000000001\n* ref\\_len: 73951.0\n* src\\_name: English\n* tgt\\_name: Italic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: itc\n* prefer\\_old: False\n* long\\_pair: eng-itc\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #it #ca #rm #es #ro #gl #sc #co #wa #pt #oc #an #id #fr #ht #itc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-itc\n\n\n* source group: English\n* target group: Italic languages\n* OPUS readme: eng-itc\n* model: transformer\n* source language(s): eng\n* target language(s): arg ast cat cos egl ext fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lat\\_Latn lij lld\\_Latn lmo max\\_Latn mfe min mwl oci pap pms por roh ron scn spa tmw\\_Latn vec wln zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.1, chr-F: 0.565\ntestset: URL, BLEU: 29.9, chr-F: 0.574\ntestset: URL, BLEU: 35.3, chr-F: 0.609\ntestset: URL, BLEU: 27.7, chr-F: 0.567\ntestset: URL, BLEU: 28.6, chr-F: 0.586\ntestset: URL, BLEU: 29.8, chr-F: 0.569\ntestset: URL, BLEU: 25.0, chr-F: 0.536\ntestset: URL, BLEU: 27.1, chr-F: 0.548\ntestset: URL, BLEU: 26.7, chr-F: 0.557\ntestset: URL, BLEU: 28.9, chr-F: 0.583\ntestset: URL, BLEU: 28.9, chr-F: 0.567\ntestset: URL, BLEU: 29.6, chr-F: 0.574\ntestset: URL, BLEU: 33.8, chr-F: 0.598\ntestset: URL, BLEU: 30.9, chr-F: 0.590\ntestset: URL, BLEU: 34.8, chr-F: 0.598\ntestset: URL, BLEU: 29.1, chr-F: 0.574\ntestset: URL, BLEU: 34.9, chr-F: 0.600\ntestset: URL, BLEU: 30.1, chr-F: 0.567\ntestset: URL, BLEU: 31.8, chr-F: 0.576\ntestset: URL, BLEU: 25.9, chr-F: 0.548\ntestset: URL, BLEU: 1.6, chr-F: 0.120\ntestset: URL, BLEU: 17.2, chr-F: 0.389\ntestset: URL, BLEU: 47.6, chr-F: 0.668\ntestset: URL, BLEU: 4.3, chr-F: 0.287\ntestset: URL, BLEU: 0.9, chr-F: 0.101\ntestset: URL, BLEU: 8.7, chr-F: 0.287\ntestset: URL, BLEU: 44.9, chr-F: 0.635\ntestset: URL, BLEU: 1.0, chr-F: 0.225\ntestset: URL, BLEU: 0.7, chr-F: 0.115\ntestset: URL, BLEU: 44.9, chr-F: 0.648\ntestset: URL, BLEU: 30.9, chr-F: 0.533\ntestset: URL, BLEU: 45.4, chr-F: 0.673\ntestset: URL, BLEU: 5.6, chr-F: 
0.279\ntestset: URL, BLEU: 12.1, chr-F: 0.380\ntestset: URL, BLEU: 1.4, chr-F: 0.183\ntestset: URL, BLEU: 0.5, chr-F: 0.199\ntestset: URL, BLEU: 0.7, chr-F: 0.187\ntestset: URL, BLEU: 83.6, chr-F: 0.909\ntestset: URL, BLEU: 31.3, chr-F: 0.549\ntestset: URL, BLEU: 38.0, chr-F: 0.588\ntestset: URL, BLEU: 2.7, chr-F: 0.322\ntestset: URL, BLEU: 8.2, chr-F: 0.293\ntestset: URL, BLEU: 46.7, chr-F: 0.663\ntestset: URL, BLEU: 2.1, chr-F: 0.194\ntestset: URL, BLEU: 41.2, chr-F: 0.635\ntestset: URL, BLEU: 2.6, chr-F: 0.237\ntestset: URL, BLEU: 40.6, chr-F: 0.632\ntestset: URL, BLEU: 1.6, chr-F: 0.181\ntestset: URL, BLEU: 49.5, chr-F: 0.685\ntestset: URL, BLEU: 1.6, chr-F: 0.223\ntestset: URL, BLEU: 7.1, chr-F: 0.250",
"### System Info:\n\n\n* hf\\_name: eng-itc\n* source\\_languages: eng\n* target\\_languages: itc\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat\\_Latn', 'lad\\_Latn', 'pcd', 'lat\\_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\\_Latn', 'frm\\_Latn', 'scn', 'mfe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: itc\n* short\\_pair: en-itc\n* chrF2\\_score: 0.588\n* bleu: 38.0\n* brevity\\_penalty: 0.9670000000000001\n* ref\\_len: 73951.0\n* src\\_name: English\n* tgt\\_name: Italic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: itc\n* prefer\\_old: False\n* long\\_pair: eng-itc\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
85,
1396,
707
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #it #ca #rm #es #ro #gl #sc #co #wa #pt #oc #an #id #fr #ht #itc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-itc\n\n\n* source group: English\n* target group: Italic languages\n* OPUS readme: eng-itc\n* model: transformer\n* source language(s): eng\n* target language(s): arg ast cat cos egl ext fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lat\\_Latn lij lld\\_Latn lmo max\\_Latn mfe min mwl oci pap pms por roh ron scn spa tmw\\_Latn vec wln zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.1, chr-F: 0.565\ntestset: URL, BLEU: 29.9, chr-F: 0.574\ntestset: URL, BLEU: 35.3, chr-F: 0.609\ntestset: URL, BLEU: 27.7, chr-F: 0.567\ntestset: URL, BLEU: 28.6, chr-F: 0.586\ntestset: URL, BLEU: 29.8, chr-F: 0.569\ntestset: URL, BLEU: 25.0, chr-F: 0.536\ntestset: URL, BLEU: 27.1, chr-F: 0.548\ntestset: URL, BLEU: 26.7, chr-F: 0.557\ntestset: URL, BLEU: 28.9, chr-F: 0.583\ntestset: URL, BLEU: 28.9, chr-F: 0.567\ntestset: URL, BLEU: 29.6, chr-F: 0.574\ntestset: URL, BLEU: 33.8, chr-F: 0.598\ntestset: URL, BLEU: 30.9, chr-F: 0.590\ntestset: URL, BLEU: 34.8, chr-F: 0.598\ntestset: URL, BLEU: 29.1, chr-F: 0.574\ntestset: URL, BLEU: 34.9, chr-F: 0.600\ntestset: URL, BLEU: 30.1, chr-F: 0.567\ntestset: URL, BLEU: 31.8, chr-F: 0.576\ntestset: URL, BLEU: 25.9, chr-F: 0.548\ntestset: URL, BLEU: 1.6, chr-F: 0.120\ntestset: URL, BLEU: 17.2, chr-F: 0.389\ntestset: URL, BLEU: 47.6, chr-F: 0.668\ntestset: URL, BLEU: 4.3, chr-F: 0.287\ntestset: URL, BLEU: 0.9, chr-F: 0.101\ntestset: URL, BLEU: 8.7, chr-F: 0.287\ntestset: URL, BLEU: 44.9, chr-F: 
0.635\ntestset: URL, BLEU: 1.0, chr-F: 0.225\ntestset: URL, BLEU: 0.7, chr-F: 0.115\ntestset: URL, BLEU: 44.9, chr-F: 0.648\ntestset: URL, BLEU: 30.9, chr-F: 0.533\ntestset: URL, BLEU: 45.4, chr-F: 0.673\ntestset: URL, BLEU: 5.6, chr-F: 0.279\ntestset: URL, BLEU: 12.1, chr-F: 0.380\ntestset: URL, BLEU: 1.4, chr-F: 0.183\ntestset: URL, BLEU: 0.5, chr-F: 0.199\ntestset: URL, BLEU: 0.7, chr-F: 0.187\ntestset: URL, BLEU: 83.6, chr-F: 0.909\ntestset: URL, BLEU: 31.3, chr-F: 0.549\ntestset: URL, BLEU: 38.0, chr-F: 0.588\ntestset: URL, BLEU: 2.7, chr-F: 0.322\ntestset: URL, BLEU: 8.2, chr-F: 0.293\ntestset: URL, BLEU: 46.7, chr-F: 0.663\ntestset: URL, BLEU: 2.1, chr-F: 0.194\ntestset: URL, BLEU: 41.2, chr-F: 0.635\ntestset: URL, BLEU: 2.6, chr-F: 0.237\ntestset: URL, BLEU: 40.6, chr-F: 0.632\ntestset: URL, BLEU: 1.6, chr-F: 0.181\ntestset: URL, BLEU: 49.5, chr-F: 0.685\ntestset: URL, BLEU: 1.6, chr-F: 0.223\ntestset: URL, BLEU: 7.1, chr-F: 0.250### System Info:\n\n\n* hf\\_name: eng-itc\n* source\\_languages: eng\n* target\\_languages: itc\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat\\_Latn', 'lad\\_Latn', 'pcd', 'lat\\_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\\_Latn', 'frm\\_Latn', 'scn', 'mfe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: itc\n* short\\_pair: en-itc\n* chrF2\\_score: 0.588\n* bleu: 38.0\n* brevity\\_penalty: 0.9670000000000001\n* ref\\_len: 73951.0\n* 
src\\_name: English\n* tgt\\_name: Italic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: itc\n* prefer\\_old: False\n* long\\_pair: eng-itc\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-jap
* source languages: en
* target languages: jap
* OPUS readme: [en-jap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-jap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-jap/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| bible-uedin.en.jap | 42.1 | 0.960 |
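The chr-F column in these benchmark tables is the character n-gram F-score (chrF). A minimal stdlib sketch of the metric follows; it is simplified relative to standard implementations such as sacrebleu (which also offer chrF++ word n-grams and whitespace-handling options):

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    """Character n-grams of a string (whitespace removed, as in common chrF setups)."""
    chars = text.replace(" ", "")
    return Counter(chars[i:i + n] for i in range(len(chars) - n + 1))

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified sentence-level chrF: F-score over averaged char n-gram P/R, n = 1..max_n."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if sum(hyp.values()) == 0 or sum(ref.values()) == 0:
            continue  # string too short for this n-gram order
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

An identical hypothesis and reference score 1.0, fully disjoint strings score 0.0, and partial overlaps fall in between.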
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-jap | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"jap",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #jap #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-jap
* source languages: en
* target languages: jap
* OPUS readme: en-jap
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 42.1, chr-F: 0.960
| [
"### opus-mt-en-jap\n\n\n* source languages: en\n* target languages: jap\n* OPUS readme: en-jap\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.1, chr-F: 0.960"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #jap #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-jap\n\n\n* source languages: en\n* target languages: jap\n* OPUS readme: en-jap\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.1, chr-F: 0.960"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #jap #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-jap\n\n\n* source languages: en\n* target languages: jap\n* OPUS readme: en-jap\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.1, chr-F: 0.960"
] |
translation | transformers |
### opus-mt-en-kg
* source languages: en
* target languages: kg
* OPUS readme: [en-kg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kg/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kg/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kg/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kg | 39.6 | 0.613 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-kg | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"kg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #kg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-kg
* source languages: en
* target languages: kg
* OPUS readme: en-kg
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 39.6, chr-F: 0.613
| [
"### opus-mt-en-kg\n\n\n* source languages: en\n* target languages: kg\n* OPUS readme: en-kg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.6, chr-F: 0.613"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #kg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-kg\n\n\n* source languages: en\n* target languages: kg\n* OPUS readme: en-kg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.6, chr-F: 0.613"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #kg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-kg\n\n\n* source languages: en\n* target languages: kg\n* OPUS readme: en-kg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.6, chr-F: 0.613"
] |
translation | transformers |
### opus-mt-en-kj
* source languages: en
* target languages: kj
* OPUS readme: [en-kj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kj/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kj | 29.6 | 0.539 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-kj | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"kj",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #kj #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-kj
* source languages: en
* target languages: kj
* OPUS readme: en-kj
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 29.6, chr-F: 0.539
| [
"### opus-mt-en-kj\n\n\n* source languages: en\n* target languages: kj\n* OPUS readme: en-kj\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.6, chr-F: 0.539"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #kj #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-kj\n\n\n* source languages: en\n* target languages: kj\n* OPUS readme: en-kj\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.6, chr-F: 0.539"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #kj #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-kj\n\n\n* source languages: en\n* target languages: kj\n* OPUS readme: en-kj\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.6, chr-F: 0.539"
] |
translation | transformers |
### opus-mt-en-kqn
* source languages: en
* target languages: kqn
* OPUS readme: [en-kqn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kqn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kqn/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kqn | 33.1 | 0.567 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-kqn | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"kqn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #kqn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-kqn
* source languages: en
* target languages: kqn
* OPUS readme: en-kqn
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.1, chr-F: 0.567
| [
"### opus-mt-en-kqn\n\n\n* source languages: en\n* target languages: kqn\n* OPUS readme: en-kqn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.1, chr-F: 0.567"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #kqn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-kqn\n\n\n* source languages: en\n* target languages: kqn\n* OPUS readme: en-kqn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.1, chr-F: 0.567"
] | [
53,
112
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #kqn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-kqn\n\n\n* source languages: en\n* target languages: kqn\n* OPUS readme: en-kqn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.1, chr-F: 0.567"
] |
translation | transformers |
### opus-mt-en-kwn
* source languages: en
* target languages: kwn
* OPUS readme: [en-kwn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kwn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwn/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kwn | 27.6 | 0.513 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-kwn | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"kwn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #kwn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-kwn
* source languages: en
* target languages: kwn
* OPUS readme: en-kwn
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.6, chr-F: 0.513
| [
"### opus-mt-en-kwn\n\n\n* source languages: en\n* target languages: kwn\n* OPUS readme: en-kwn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.513"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #kwn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-kwn\n\n\n* source languages: en\n* target languages: kwn\n* OPUS readme: en-kwn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.513"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #kwn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-kwn\n\n\n* source languages: en\n* target languages: kwn\n* OPUS readme: en-kwn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.513"
] |
translation | transformers |
### opus-mt-en-kwy
* source languages: en
* target languages: kwy
* OPUS readme: [en-kwy](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-kwy/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-kwy/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwy/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-kwy/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.kwy | 33.6 | 0.543 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-kwy | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"kwy",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #kwy #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-kwy
* source languages: en
* target languages: kwy
* OPUS readme: en-kwy
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.6, chr-F: 0.543
| [
"### opus-mt-en-kwy\n\n\n* source languages: en\n* target languages: kwy\n* OPUS readme: en-kwy\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.6, chr-F: 0.543"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #kwy #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-kwy\n\n\n* source languages: en\n* target languages: kwy\n* OPUS readme: en-kwy\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.6, chr-F: 0.543"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #kwy #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-kwy\n\n\n* source languages: en\n* target languages: kwy\n* OPUS readme: en-kwy\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.6, chr-F: 0.543"
] |
translation | transformers |
### opus-mt-en-lg
* source languages: en
* target languages: lg
* OPUS readme: [en-lg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lg/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.lg | 30.4 | 0.543 |
| Tatoeba.en.lg | 5.7 | 0.386 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-lg | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"lg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #lg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-lg
* source languages: en
* target languages: lg
* OPUS readme: en-lg
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.4, chr-F: 0.543
testset: URL, BLEU: 5.7, chr-F: 0.386
| [
"### opus-mt-en-lg\n\n\n* source languages: en\n* target languages: lg\n* OPUS readme: en-lg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.543\ntestset: URL, BLEU: 5.7, chr-F: 0.386"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #lg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-lg\n\n\n* source languages: en\n* target languages: lg\n* OPUS readme: en-lg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.543\ntestset: URL, BLEU: 5.7, chr-F: 0.386"
] | [
52,
132
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #lg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-lg\n\n\n* source languages: en\n* target languages: lg\n* OPUS readme: en-lg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.543\ntestset: URL, BLEU: 5.7, chr-F: 0.386"
] |
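Every benchmark table in these model cards reports BLEU alongside chr-F, the character n-gram F-score. As a rough illustration of what that second column measures, here is a minimal pure-Python sketch of the metric (simplified: a single reference, whitespace stripped before n-gram extraction, n-grams up to 6, and the standard beta = 2 weighting of recall over precision — real evaluations use a full implementation such as sacreBLEU's):

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    # Character n-grams; whitespace is removed first, as in the original chrF proposal.
    s = "".join(text.split())
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    # Average n-gram precision and recall over n = 1..max_n, then combine
    # them into an F-score that weights recall by beta.
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        hyp_total, ref_total = sum(hyp.values()), sum(ref.values())
        if hyp_total == 0 or ref_total == 0:
            continue  # strings shorter than n contribute nothing at this order
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        precisions.append(overlap / hyp_total)
        recalls.append(overlap / ref_total)
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

A perfect hypothesis scores 1.0 and a fully disjoint one scores 0.0, so the 0.4–0.6 values in the tables above indicate partial character-level overlap with the references.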
translation | transformers |
### opus-mt-en-ln
* source languages: en
* target languages: ln
* OPUS readme: [en-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ln/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ln/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ln/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ln/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ln | 36.7 | 0.588 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ln | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ln",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ln #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ln
* source languages: en
* target languages: ln
* OPUS readme: en-ln
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 36.7, chr-F: 0.588
| [
"### opus-mt-en-ln\n\n\n* source languages: en\n* target languages: ln\n* OPUS readme: en-ln\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.7, chr-F: 0.588"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ln #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ln\n\n\n* source languages: en\n* target languages: ln\n* OPUS readme: en-ln\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.7, chr-F: 0.588"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ln #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ln\n\n\n* source languages: en\n* target languages: ln\n* OPUS readme: en-ln\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.7, chr-F: 0.588"
] |
translation | transformers |
### opus-mt-en-loz
* source languages: en
* target languages: loz
* OPUS readme: [en-loz](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-loz/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-loz/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-loz/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-loz/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.loz | 40.1 | 0.596 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-loz | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"loz",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #loz #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-loz
* source languages: en
* target languages: loz
* OPUS readme: en-loz
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 40.1, chr-F: 0.596
| [
"### opus-mt-en-loz\n\n\n* source languages: en\n* target languages: loz\n* OPUS readme: en-loz\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.596"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #loz #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-loz\n\n\n* source languages: en\n* target languages: loz\n* OPUS readme: en-loz\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.596"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #loz #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-loz\n\n\n* source languages: en\n* target languages: loz\n* OPUS readme: en-loz\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.596"
] |
translation | transformers |
### opus-mt-en-lu
* source languages: en
* target languages: lu
* OPUS readme: [en-lu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.lu | 34.1 | 0.564 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-lu | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"lu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #lu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-lu
* source languages: en
* target languages: lu
* OPUS readme: en-lu
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 34.1, chr-F: 0.564
| [
"### opus-mt-en-lu\n\n\n* source languages: en\n* target languages: lu\n* OPUS readme: en-lu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.1, chr-F: 0.564"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #lu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-lu\n\n\n* source languages: en\n* target languages: lu\n* OPUS readme: en-lu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.1, chr-F: 0.564"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #lu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-lu\n\n\n* source languages: en\n* target languages: lu\n* OPUS readme: en-lu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.1, chr-F: 0.564"
] |
translation | transformers |
### opus-mt-en-lua
* source languages: en
* target languages: lua
* OPUS readme: [en-lua](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lua/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lua/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lua/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lua/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.lua | 35.3 | 0.578 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-lua | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"lua",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #lua #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-lua
* source languages: en
* target languages: lua
* OPUS readme: en-lua
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 35.3, chr-F: 0.578
| [
"### opus-mt-en-lua\n\n\n* source languages: en\n* target languages: lua\n* OPUS readme: en-lua\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.3, chr-F: 0.578"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #lua #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-lua\n\n\n* source languages: en\n* target languages: lua\n* OPUS readme: en-lua\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.3, chr-F: 0.578"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #lua #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-lua\n\n\n* source languages: en\n* target languages: lua\n* OPUS readme: en-lua\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.3, chr-F: 0.578"
] |
translation | transformers |
### opus-mt-en-lue
* source languages: en
* target languages: lue
* OPUS readme: [en-lue](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lue/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lue/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lue/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lue/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.lue | 30.1 | 0.558 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-lue | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"lue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #lue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-lue
* source languages: en
* target languages: lue
* OPUS readme: en-lue
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.1, chr-F: 0.558
| [
"### opus-mt-en-lue\n\n\n* source languages: en\n* target languages: lue\n* OPUS readme: en-lue\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.1, chr-F: 0.558"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #lue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-lue\n\n\n* source languages: en\n* target languages: lue\n* OPUS readme: en-lue\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.1, chr-F: 0.558"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #lue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-lue\n\n\n* source languages: en\n* target languages: lue\n* OPUS readme: en-lue\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.1, chr-F: 0.558"
] |
translation | transformers |
### opus-mt-en-lun
* source languages: en
* target languages: lun
* OPUS readme: [en-lun](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lun/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lun/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lun/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lun/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.lun | 28.9 | 0.552 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-lun | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"lun",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #lun #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-lun
* source languages: en
* target languages: lun
* OPUS readme: en-lun
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 28.9, chr-F: 0.552
| [
"### opus-mt-en-lun\n\n\n* source languages: en\n* target languages: lun\n* OPUS readme: en-lun\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.9, chr-F: 0.552"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #lun #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-lun\n\n\n* source languages: en\n* target languages: lun\n* OPUS readme: en-lun\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.9, chr-F: 0.552"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #lun #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-lun\n\n\n* source languages: en\n* target languages: lun\n* OPUS readme: en-lun\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.9, chr-F: 0.552"
] |
translation | transformers |
### opus-mt-en-luo
* source languages: en
* target languages: luo
* OPUS readme: [en-luo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-luo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-luo/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-luo/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-luo/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.luo | 27.6 | 0.495 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-luo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"luo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #luo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-luo
* source languages: en
* target languages: luo
* OPUS readme: en-luo
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.6, chr-F: 0.495
| [
"### opus-mt-en-luo\n\n\n* source languages: en\n* target languages: luo\n* OPUS readme: en-luo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.495"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #luo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-luo\n\n\n* source languages: en\n* target languages: luo\n* OPUS readme: en-luo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.495"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #luo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-luo\n\n\n* source languages: en\n* target languages: luo\n* OPUS readme: en-luo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.495"
] |
translation | transformers |
### opus-mt-en-lus
* source languages: en
* target languages: lus
* OPUS readme: [en-lus](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lus/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lus/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lus/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lus/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.lus | 36.8 | 0.581 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-lus | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"lus",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #lus #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-lus
* source languages: en
* target languages: lus
* OPUS readme: en-lus
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 36.8, chr-F: 0.581
| [
"### opus-mt-en-lus\n\n\n* source languages: en\n* target languages: lus\n* OPUS readme: en-lus\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.8, chr-F: 0.581"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #lus #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-lus\n\n\n* source languages: en\n* target languages: lus\n* OPUS readme: en-lus\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.8, chr-F: 0.581"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #lus #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-lus\n\n\n* source languages: en\n* target languages: lus\n* OPUS readme: en-lus\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.8, chr-F: 0.581"
] |
translation | transformers |
### eng-map
* source group: English
* target group: Austronesian languages
* OPUS readme: [eng-map](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-map/README.md)
* model: transformer
* source language(s): eng
* target language(s): akl_Latn ceb cha dtp fij gil haw hil iba ilo ind jav jav_Java lkt mad mah max_Latn min mlg mri nau niu pag pau rap smo sun tah tet tmw_Latn ton tvl war zlm_Latn zsm_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.eval.txt)
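As noted in the list above, this multilingual model requires a sentence-initial `>>id<<` target-language token. A minimal sketch of that prefixing step (the `prepare_batch` helper and the `TARGETS` subset are illustrative, not part of the released model; in real use the prefixed strings would then be passed to the Marian tokenizer):

```python
# Illustrative subset of the target-language IDs listed on this card
TARGETS = {"ceb", "fij", "gil", "haw", "ilo", "mlg", "mri", "smo", "ton", "war"}

def prepare_batch(sentences, lang_id):
    """Prefix each sentence with the '>>id<<' token the multilingual model expects."""
    if lang_id not in TARGETS:
        raise ValueError(f"unknown target language id: {lang_id!r}")
    return [f">>{lang_id}<< {s}" for s in sentences]

print(prepare_batch(["Good morning."], "ceb"))
# → ['>>ceb<< Good morning.']
```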
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-akl.eng.akl | 2.2 | 0.103 |
| Tatoeba-test.eng-ceb.eng.ceb | 10.7 | 0.425 |
| Tatoeba-test.eng-cha.eng.cha | 3.2 | 0.201 |
| Tatoeba-test.eng-dtp.eng.dtp | 0.5 | 0.120 |
| Tatoeba-test.eng-fij.eng.fij | 26.8 | 0.453 |
| Tatoeba-test.eng-gil.eng.gil | 59.3 | 0.762 |
| Tatoeba-test.eng-haw.eng.haw | 1.0 | 0.116 |
| Tatoeba-test.eng-hil.eng.hil | 19.0 | 0.517 |
| Tatoeba-test.eng-iba.eng.iba | 15.5 | 0.400 |
| Tatoeba-test.eng-ilo.eng.ilo | 33.6 | 0.591 |
| Tatoeba-test.eng-jav.eng.jav | 7.8 | 0.301 |
| Tatoeba-test.eng-lkt.eng.lkt | 1.0 | 0.064 |
| Tatoeba-test.eng-mad.eng.mad | 1.1 | 0.142 |
| Tatoeba-test.eng-mah.eng.mah | 9.1 | 0.374 |
| Tatoeba-test.eng-mlg.eng.mlg | 35.4 | 0.526 |
| Tatoeba-test.eng-mri.eng.mri | 7.6 | 0.309 |
| Tatoeba-test.eng-msa.eng.msa | 31.1 | 0.565 |
| Tatoeba-test.eng.multi | 17.6 | 0.411 |
| Tatoeba-test.eng-nau.eng.nau | 1.4 | 0.098 |
| Tatoeba-test.eng-niu.eng.niu | 40.1 | 0.560 |
| Tatoeba-test.eng-pag.eng.pag | 16.8 | 0.526 |
| Tatoeba-test.eng-pau.eng.pau | 1.9 | 0.139 |
| Tatoeba-test.eng-rap.eng.rap | 2.7 | 0.090 |
| Tatoeba-test.eng-smo.eng.smo | 24.9 | 0.453 |
| Tatoeba-test.eng-sun.eng.sun | 33.2 | 0.439 |
| Tatoeba-test.eng-tah.eng.tah | 12.5 | 0.278 |
| Tatoeba-test.eng-tet.eng.tet | 1.6 | 0.140 |
| Tatoeba-test.eng-ton.eng.ton | 25.8 | 0.530 |
| Tatoeba-test.eng-tvl.eng.tvl | 31.1 | 0.523 |
| Tatoeba-test.eng-war.eng.war | 12.8 | 0.436 |
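The chr-F column in the table above is a character n-gram F-score. A simplified sketch of how such a score is computed (this is not the official chrF implementation used for these cards, which aggregates at corpus level; it only illustrates the β=2, n≤6 character n-gram idea):

```python
from collections import Counter

def char_ngrams(text, n):
    # chrF works on characters with spaces removed
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hyp, ref, max_n=6, beta=2.0):
    # average per-order character n-gram precision and recall, then F-beta
    precs, recs = [], []
    for n in range(1, max_n + 1):
        h, r = char_ngrams(hyp, n), char_ngrams(ref, n)
        overlap = sum((h & r).values())
        if h:
            precs.append(overlap / sum(h.values()))
        if r:
            recs.append(overlap / sum(r.values()))
    p = sum(precs) / len(precs) if precs else 0.0
    r = sum(recs) / len(recs) if recs else 0.0
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

print(round(chrf("the cat sat", "the cat sat"), 3))
# → 1.0
```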
### System Info:
- hf_name: eng-map
- source_languages: eng
- target_languages: map
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-map/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'map']
- src_constituents: {'eng'}
- tgt_constituents: set()
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-map/opus-2020-07-27.test.txt
- src_alpha3: eng
- tgt_alpha3: map
- short_pair: en-map
- chrF2_score: 0.41100000000000003
- bleu: 17.6
- brevity_penalty: 1.0
- ref_len: 66963.0
- src_name: English
- tgt_name: Austronesian languages
- train_date: 2020-07-27
- src_alpha2: en
- tgt_alpha2: map
- prefer_old: False
- long_pair: eng-map
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "map"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-map | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"map",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"map"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #map #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-map
* source group: English
* target group: Austronesian languages
* OPUS readme: eng-map
* model: transformer
* source language(s): eng
* target language(s): akl\_Latn ceb cha dtp fij gil haw hil iba ilo ind jav jav\_Java lkt mad mah max\_Latn min mlg mri nau niu pag pau rap smo sun tah tet tmw\_Latn ton tvl war zlm\_Latn zsm\_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 2.2, chr-F: 0.103
testset: URL, BLEU: 10.7, chr-F: 0.425
testset: URL, BLEU: 3.2, chr-F: 0.201
testset: URL, BLEU: 0.5, chr-F: 0.120
testset: URL, BLEU: 26.8, chr-F: 0.453
testset: URL, BLEU: 59.3, chr-F: 0.762
testset: URL, BLEU: 1.0, chr-F: 0.116
testset: URL, BLEU: 19.0, chr-F: 0.517
testset: URL, BLEU: 15.5, chr-F: 0.400
testset: URL, BLEU: 33.6, chr-F: 0.591
testset: URL, BLEU: 7.8, chr-F: 0.301
testset: URL, BLEU: 1.0, chr-F: 0.064
testset: URL, BLEU: 1.1, chr-F: 0.142
testset: URL, BLEU: 9.1, chr-F: 0.374
testset: URL, BLEU: 35.4, chr-F: 0.526
testset: URL, BLEU: 7.6, chr-F: 0.309
testset: URL, BLEU: 31.1, chr-F: 0.565
testset: URL, BLEU: 17.6, chr-F: 0.411
testset: URL, BLEU: 1.4, chr-F: 0.098
testset: URL, BLEU: 40.1, chr-F: 0.560
testset: URL, BLEU: 16.8, chr-F: 0.526
testset: URL, BLEU: 1.9, chr-F: 0.139
testset: URL, BLEU: 2.7, chr-F: 0.090
testset: URL, BLEU: 24.9, chr-F: 0.453
testset: URL, BLEU: 33.2, chr-F: 0.439
testset: URL, BLEU: 12.5, chr-F: 0.278
testset: URL, BLEU: 1.6, chr-F: 0.140
testset: URL, BLEU: 25.8, chr-F: 0.530
testset: URL, BLEU: 31.1, chr-F: 0.523
testset: URL, BLEU: 12.8, chr-F: 0.436
### System Info:
* hf\_name: eng-map
* source\_languages: eng
* target\_languages: map
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'map']
* src\_constituents: {'eng'}
* tgt\_constituents: set()
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: map
* short\_pair: en-map
* chrF2\_score: 0.41100000000000003
* bleu: 17.6
* brevity\_penalty: 1.0
* ref\_len: 66963.0
* src\_name: English
* tgt\_name: Austronesian languages
* train\_date: 2020-07-27
* src\_alpha2: en
* tgt\_alpha2: map
* prefer\_old: False
* long\_pair: eng-map
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-map\n\n\n* source group: English\n* target group: Austronesian languages\n* OPUS readme: eng-map\n* model: transformer\n* source language(s): eng\n* target language(s): akl\\_Latn ceb cha dtp fij gil haw hil iba ilo ind jav jav\\_Java lkt mad mah max\\_Latn min mlg mri nau niu pag pau rap smo sun tah tet tmw\\_Latn ton tvl war zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 2.2, chr-F: 0.103\ntestset: URL, BLEU: 10.7, chr-F: 0.425\ntestset: URL, BLEU: 3.2, chr-F: 0.201\ntestset: URL, BLEU: 0.5, chr-F: 0.120\ntestset: URL, BLEU: 26.8, chr-F: 0.453\ntestset: URL, BLEU: 59.3, chr-F: 0.762\ntestset: URL, BLEU: 1.0, chr-F: 0.116\ntestset: URL, BLEU: 19.0, chr-F: 0.517\ntestset: URL, BLEU: 15.5, chr-F: 0.400\ntestset: URL, BLEU: 33.6, chr-F: 0.591\ntestset: URL, BLEU: 7.8, chr-F: 0.301\ntestset: URL, BLEU: 1.0, chr-F: 0.064\ntestset: URL, BLEU: 1.1, chr-F: 0.142\ntestset: URL, BLEU: 9.1, chr-F: 0.374\ntestset: URL, BLEU: 35.4, chr-F: 0.526\ntestset: URL, BLEU: 7.6, chr-F: 0.309\ntestset: URL, BLEU: 31.1, chr-F: 0.565\ntestset: URL, BLEU: 17.6, chr-F: 0.411\ntestset: URL, BLEU: 1.4, chr-F: 0.098\ntestset: URL, BLEU: 40.1, chr-F: 0.560\ntestset: URL, BLEU: 16.8, chr-F: 0.526\ntestset: URL, BLEU: 1.9, chr-F: 0.139\ntestset: URL, BLEU: 2.7, chr-F: 0.090\ntestset: URL, BLEU: 24.9, chr-F: 0.453\ntestset: URL, BLEU: 33.2, chr-F: 0.439\ntestset: URL, BLEU: 12.5, chr-F: 0.278\ntestset: URL, BLEU: 1.6, chr-F: 0.140\ntestset: URL, BLEU: 25.8, chr-F: 0.530\ntestset: URL, BLEU: 31.1, chr-F: 0.523\ntestset: URL, BLEU: 12.8, chr-F: 0.436",
"### System Info:\n\n\n* hf\\_name: eng-map\n* source\\_languages: eng\n* target\\_languages: map\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'map']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: set()\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: map\n* short\\_pair: en-map\n* chrF2\\_score: 0.41100000000000003\n* bleu: 17.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 66963.0\n* src\\_name: English\n* tgt\\_name: Austronesian languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: en\n* tgt\\_alpha2: map\n* prefer\\_old: False\n* long\\_pair: eng-map\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #map #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-map\n\n\n* source group: English\n* target group: Austronesian languages\n* OPUS readme: eng-map\n* model: transformer\n* source language(s): eng\n* target language(s): akl\\_Latn ceb cha dtp fij gil haw hil iba ilo ind jav jav\\_Java lkt mad mah max\\_Latn min mlg mri nau niu pag pau rap smo sun tah tet tmw\\_Latn ton tvl war zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 2.2, chr-F: 0.103\ntestset: URL, BLEU: 10.7, chr-F: 0.425\ntestset: URL, BLEU: 3.2, chr-F: 0.201\ntestset: URL, BLEU: 0.5, chr-F: 0.120\ntestset: URL, BLEU: 26.8, chr-F: 0.453\ntestset: URL, BLEU: 59.3, chr-F: 0.762\ntestset: URL, BLEU: 1.0, chr-F: 0.116\ntestset: URL, BLEU: 19.0, chr-F: 0.517\ntestset: URL, BLEU: 15.5, chr-F: 0.400\ntestset: URL, BLEU: 33.6, chr-F: 0.591\ntestset: URL, BLEU: 7.8, chr-F: 0.301\ntestset: URL, BLEU: 1.0, chr-F: 0.064\ntestset: URL, BLEU: 1.1, chr-F: 0.142\ntestset: URL, BLEU: 9.1, chr-F: 0.374\ntestset: URL, BLEU: 35.4, chr-F: 0.526\ntestset: URL, BLEU: 7.6, chr-F: 0.309\ntestset: URL, BLEU: 31.1, chr-F: 0.565\ntestset: URL, BLEU: 17.6, chr-F: 0.411\ntestset: URL, BLEU: 1.4, chr-F: 0.098\ntestset: URL, BLEU: 40.1, chr-F: 0.560\ntestset: URL, BLEU: 16.8, chr-F: 0.526\ntestset: URL, BLEU: 1.9, chr-F: 0.139\ntestset: URL, BLEU: 2.7, chr-F: 0.090\ntestset: URL, BLEU: 24.9, chr-F: 0.453\ntestset: URL, BLEU: 33.2, chr-F: 0.439\ntestset: URL, BLEU: 12.5, chr-F: 0.278\ntestset: URL, BLEU: 1.6, chr-F: 0.140\ntestset: URL, BLEU: 25.8, chr-F: 0.530\ntestset: URL, BLEU: 31.1, chr-F: 0.523\ntestset: URL, BLEU: 12.8, chr-F: 0.436",
"### System Info:\n\n\n* hf\\_name: eng-map\n* source\\_languages: eng\n* target\\_languages: map\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'map']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: set()\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: map\n* short\\_pair: en-map\n* chrF2\\_score: 0.41100000000000003\n* bleu: 17.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 66963.0\n* src\\_name: English\n* tgt\\_name: Austronesian languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: en\n* tgt\\_alpha2: map\n* prefer\\_old: False\n* long\\_pair: eng-map\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
894,
399
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #map #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-map\n\n\n* source group: English\n* target group: Austronesian languages\n* OPUS readme: eng-map\n* model: transformer\n* source language(s): eng\n* target language(s): akl\\_Latn ceb cha dtp fij gil haw hil iba ilo ind jav jav\\_Java lkt mad mah max\\_Latn min mlg mri nau niu pag pau rap smo sun tah tet tmw\\_Latn ton tvl war zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 2.2, chr-F: 0.103\ntestset: URL, BLEU: 10.7, chr-F: 0.425\ntestset: URL, BLEU: 3.2, chr-F: 0.201\ntestset: URL, BLEU: 0.5, chr-F: 0.120\ntestset: URL, BLEU: 26.8, chr-F: 0.453\ntestset: URL, BLEU: 59.3, chr-F: 0.762\ntestset: URL, BLEU: 1.0, chr-F: 0.116\ntestset: URL, BLEU: 19.0, chr-F: 0.517\ntestset: URL, BLEU: 15.5, chr-F: 0.400\ntestset: URL, BLEU: 33.6, chr-F: 0.591\ntestset: URL, BLEU: 7.8, chr-F: 0.301\ntestset: URL, BLEU: 1.0, chr-F: 0.064\ntestset: URL, BLEU: 1.1, chr-F: 0.142\ntestset: URL, BLEU: 9.1, chr-F: 0.374\ntestset: URL, BLEU: 35.4, chr-F: 0.526\ntestset: URL, BLEU: 7.6, chr-F: 0.309\ntestset: URL, BLEU: 31.1, chr-F: 0.565\ntestset: URL, BLEU: 17.6, chr-F: 0.411\ntestset: URL, BLEU: 1.4, chr-F: 0.098\ntestset: URL, BLEU: 40.1, chr-F: 0.560\ntestset: URL, BLEU: 16.8, chr-F: 0.526\ntestset: URL, BLEU: 1.9, chr-F: 0.139\ntestset: URL, BLEU: 2.7, chr-F: 0.090\ntestset: URL, BLEU: 24.9, chr-F: 0.453\ntestset: URL, BLEU: 33.2, chr-F: 0.439\ntestset: URL, BLEU: 12.5, chr-F: 0.278\ntestset: URL, BLEU: 1.6, chr-F: 0.140\ntestset: URL, BLEU: 25.8, chr-F: 0.530\ntestset: URL, BLEU: 31.1, chr-F: 
0.523\ntestset: URL, BLEU: 12.8, chr-F: 0.436### System Info:\n\n\n* hf\\_name: eng-map\n* source\\_languages: eng\n* target\\_languages: map\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'map']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: set()\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: map\n* short\\_pair: en-map\n* chrF2\\_score: 0.41100000000000003\n* bleu: 17.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 66963.0\n* src\\_name: English\n* tgt\\_name: Austronesian languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: en\n* tgt\\_alpha2: map\n* prefer\\_old: False\n* long\\_pair: eng-map\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-mfe
* source languages: en
* target languages: mfe
* OPUS readme: [en-mfe](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mfe/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mfe/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mfe/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mfe/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.mfe | 32.1 | 0.509 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-mfe | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"mfe",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #mfe #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-mfe
* source languages: en
* target languages: mfe
* OPUS readme: en-mfe
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 32.1, chr-F: 0.509
| [
"### opus-mt-en-mfe\n\n\n* source languages: en\n* target languages: mfe\n* OPUS readme: en-mfe\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.1, chr-F: 0.509"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #mfe #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-mfe\n\n\n* source languages: en\n* target languages: mfe\n* OPUS readme: en-mfe\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.1, chr-F: 0.509"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #mfe #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-mfe\n\n\n* source languages: en\n* target languages: mfe\n* OPUS readme: en-mfe\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.1, chr-F: 0.509"
] |
translation | transformers |
### opus-mt-en-mg
* source languages: en
* target languages: mg
* OPUS readme: [en-mg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mg/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mg/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mg/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.en.mg | 22.3 | 0.565 |
| Tatoeba.en.mg | 35.5 | 0.548 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-mg | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"mg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #mg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-mg
* source languages: en
* target languages: mg
* OPUS readme: en-mg
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.3, chr-F: 0.565
testset: URL, BLEU: 35.5, chr-F: 0.548
| [
"### opus-mt-en-mg\n\n\n* source languages: en\n* target languages: mg\n* OPUS readme: en-mg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.3, chr-F: 0.565\ntestset: URL, BLEU: 35.5, chr-F: 0.548"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #mg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-mg\n\n\n* source languages: en\n* target languages: mg\n* OPUS readme: en-mg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.3, chr-F: 0.565\ntestset: URL, BLEU: 35.5, chr-F: 0.548"
] | [
51,
129
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #mg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-mg\n\n\n* source languages: en\n* target languages: mg\n* OPUS readme: en-mg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.3, chr-F: 0.565\ntestset: URL, BLEU: 35.5, chr-F: 0.548"
] |