### opus-mt-fr-ase
* source languages: fr
* target languages: ase
* OPUS readme: [fr-ase](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ase/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ase/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ase/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ase/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ase | 38.5 | 0.545 |
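The card's tags (transformers, pytorch, marian) mean the model can be loaded with the MarianMT classes in `transformers`. A minimal sketch, assuming `transformers` and `sentencepiece` are installed; the helper names are ours, and the weights are downloaded from the Hub on first use:

```python
def opus_mt_model_id(src: str, tgt: str) -> str:
    """Hub id pattern used by the Helsinki-NLP OPUS-MT ports."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"


def translate(texts, src="fr", tgt="ase"):
    # Import kept local: requires `transformers` and `sentencepiece`,
    # and triggers a one-time download of the model weights.
    from transformers import MarianMTModel, MarianTokenizer

    name = opus_mt_model_id(src, tgt)
    tokenizer = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name)
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)


# e.g. translate(["Bonjour le monde !"])
```

The same pattern works for every fr-* card below by changing the `tgt` code.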
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-ase | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ase",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ase #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-ase
* source languages: fr
* target languages: ase
* OPUS readme: fr-ase
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.5, chr-F: 0.545
| [
"### opus-mt-fr-ase\n\n\n* source languages: fr\n* target languages: ase\n* OPUS readme: fr-ase\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.5, chr-F: 0.545"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ase #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-ase\n\n\n* source languages: fr\n* target languages: ase\n* OPUS readme: fr-ase\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.5, chr-F: 0.545"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ase #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-ase\n\n\n* source languages: fr\n* target languages: ase\n* OPUS readme: fr-ase\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.5, chr-F: 0.545"
] |
translation | transformers |
### opus-mt-fr-bcl
* source languages: fr
* target languages: bcl
* OPUS readme: [fr-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-bcl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-bcl/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bcl/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bcl/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.bcl | 35.9 | 0.566 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-bcl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"bcl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #bcl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-bcl
* source languages: fr
* target languages: bcl
* OPUS readme: fr-bcl
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 35.9, chr-F: 0.566
| [
"### opus-mt-fr-bcl\n\n\n* source languages: fr\n* target languages: bcl\n* OPUS readme: fr-bcl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.9, chr-F: 0.566"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #bcl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-bcl\n\n\n* source languages: fr\n* target languages: bcl\n* OPUS readme: fr-bcl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.9, chr-F: 0.566"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #bcl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-bcl\n\n\n* source languages: fr\n* target languages: bcl\n* OPUS readme: fr-bcl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.9, chr-F: 0.566"
] |
translation | transformers |
### opus-mt-fr-bem
* source languages: fr
* target languages: bem
* OPUS readme: [fr-bem](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-bem/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-bem/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bem/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bem/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.bem | 22.8 | 0.456 |
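The three release artifacts linked above (weights, test-set translations, test-set scores) all follow one URL pattern. A small helper, assuming the pattern holds for other releases as well; the function name is ours:

```python
BASE = "https://object.pouta.csc.fi/OPUS-MT-models"


def opus_artifact_urls(pair: str, release: str) -> dict:
    """Derive the weight/test/eval URLs for an OPUS-MT release,
    following the link pattern used in these cards."""
    stem = f"{BASE}/{pair}/opus-{release}"
    return {
        "weights": stem + ".zip",
        "translations": stem + ".test.txt",
        "scores": stem + ".eval.txt",
    }
```

For example, `opus_artifact_urls("fr-bem", "2020-01-09")` reproduces the three links of this card.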
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-bem | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"bem",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #bem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-bem
* source languages: fr
* target languages: bem
* OPUS readme: fr-bem
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.8, chr-F: 0.456
| [
"### opus-mt-fr-bem\n\n\n* source languages: fr\n* target languages: bem\n* OPUS readme: fr-bem\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.456"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #bem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-bem\n\n\n* source languages: fr\n* target languages: bem\n* OPUS readme: fr-bem\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.456"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #bem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-bem\n\n\n* source languages: fr\n* target languages: bem\n* OPUS readme: fr-bem\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.456"
] |
translation | transformers |
### opus-mt-fr-ber
* source languages: fr
* target languages: ber
* OPUS readme: [fr-ber](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ber/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ber/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ber/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ber/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.ber | 37.2 | 0.641 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-ber | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ber",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ber #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-ber
* source languages: fr
* target languages: ber
* OPUS readme: fr-ber
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.2, chr-F: 0.641
| [
"### opus-mt-fr-ber\n\n\n* source languages: fr\n* target languages: ber\n* OPUS readme: fr-ber\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.2, chr-F: 0.641"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ber #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-ber\n\n\n* source languages: fr\n* target languages: ber\n* OPUS readme: fr-ber\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.2, chr-F: 0.641"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ber #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-ber\n\n\n* source languages: fr\n* target languages: ber\n* OPUS readme: fr-ber\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.2, chr-F: 0.641"
] |
translation | transformers |
### fra-bul
* source group: French
* target group: Bulgarian
* OPUS readme: [fra-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-bul/README.md)
* model: transformer
* source language(s): fra
* target language(s): bul
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.bul | 46.3 | 0.657 |
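The chr-F column is the character n-gram F-score (chrF) reported alongside BLEU. A simplified, sentence-level sketch of the metric; the whitespace handling and defaults here are assumptions, not the exact configuration of the OPUS evaluation:

```python
from collections import Counter


def char_ngrams(text: str, n: int) -> Counter:
    # chrF operates on character n-grams; whitespace is stripped here.
    text = "".join(text.split())
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))


def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Average F-score over character n-grams of order 1..max_n,
    with recall weighted beta times as much as precision."""
    scores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue
        overlap = sum((hyp & ref).values())
        prec = overlap / sum(hyp.values())
        rec = overlap / sum(ref.values())
        if prec + rec == 0:
            scores.append(0.0)
            continue
        b2 = beta * beta
        scores.append((1 + b2) * prec * rec / (b2 * prec + rec))
    return sum(scores) / len(scores) if scores else 0.0
```

An identical hypothesis and reference score 1.0; fully disjoint strings score 0.0.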
### System Info:
- hf_name: fra-bul
- source_languages: fra
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'bg']
- src_constituents: {'fra'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-bul/opus-2020-07-03.test.txt
- src_alpha3: fra
- tgt_alpha3: bul
- short_pair: fr-bg
- chrF2_score: 0.657
- bleu: 46.3
- brevity_penalty: 0.953
- ref_len: 3286.0
- src_name: French
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: fr
- tgt_alpha2: bg
- prefer_old: False
- long_pair: fra-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41

Published on the Hugging Face Hub as Helsinki-NLP/opus-mt-fr-bg under the Apache-2.0 license.

### opus-mt-fr-bi
* source languages: fr
* target languages: bi
* OPUS readme: [fr-bi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-bi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-bi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.bi | 28.4 | 0.464 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-bi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"bi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #bi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-bi
* source languages: fr
* target languages: bi
* OPUS readme: fr-bi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 28.4, chr-F: 0.464
| [
"### opus-mt-fr-bi\n\n\n* source languages: fr\n* target languages: bi\n* OPUS readme: fr-bi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.4, chr-F: 0.464"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #bi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-bi\n\n\n* source languages: fr\n* target languages: bi\n* OPUS readme: fr-bi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.4, chr-F: 0.464"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #bi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-bi\n\n\n* source languages: fr\n* target languages: bi\n* OPUS readme: fr-bi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.4, chr-F: 0.464"
] |
translation | transformers |
### opus-mt-fr-bzs
* source languages: fr
* target languages: bzs
* OPUS readme: [fr-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-bzs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-bzs/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bzs/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bzs/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.bzs | 30.2 | 0.477 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-bzs | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"bzs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #bzs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-bzs
* source languages: fr
* target languages: bzs
* OPUS readme: fr-bzs
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.2, chr-F: 0.477
| [
"### opus-mt-fr-bzs\n\n\n* source languages: fr\n* target languages: bzs\n* OPUS readme: fr-bzs\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.2, chr-F: 0.477"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #bzs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-bzs\n\n\n* source languages: fr\n* target languages: bzs\n* OPUS readme: fr-bzs\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.2, chr-F: 0.477"
] | [
53,
112
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #bzs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-bzs\n\n\n* source languages: fr\n* target languages: bzs\n* OPUS readme: fr-bzs\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.2, chr-F: 0.477"
] |
translation | transformers |
### fra-cat
* source group: French
* target group: Catalan
* OPUS readme: [fra-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-cat/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): cat
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-cat/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-cat/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-cat/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.cat | 43.4 | 0.645 |
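This model is published under the short pair fr-ca while OPUS uses the long pair fra-cat. A minimal sketch of the pair mapping and the high-level `pipeline` loader; the helper names and the small mapping table are ours, and loading requires `transformers` and `sentencepiece`:

```python
# ISO 639-3 -> ISO 639-1 mapping; covers only this card's languages.
ALPHA3_TO_ALPHA2 = {"fra": "fr", "cat": "ca"}


def short_pair(long_pair: str) -> str:
    """Map an OPUS long pair (e.g. 'fra-cat') to the Hub short pair ('fr-ca')."""
    src, tgt = long_pair.split("-")
    return f"{ALPHA3_TO_ALPHA2[src]}-{ALPHA3_TO_ALPHA2[tgt]}"


def load_translator(model_id: str = "Helsinki-NLP/opus-mt-fr-ca"):
    # Import kept local so the pure helper above works without the
    # dependency; weights are downloaded from the Hub on first use.
    from transformers import pipeline
    return pipeline("translation", model=model_id)
```

For example, `short_pair("fra-cat")` yields the repo suffix of this card's Hub id.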
### System Info:
- hf_name: fra-cat
- source_languages: fra
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'ca']
- src_constituents: {'fra'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-cat/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-cat/opus-2020-06-16.test.txt
- src_alpha3: fra
- tgt_alpha3: cat
- short_pair: fr-ca
- chrF2_score: 0.645
- bleu: 43.4
- brevity_penalty: 0.982
- ref_len: 5214.0
- src_name: French
- tgt_name: Catalan
- train_date: 2020-06-16
- src_alpha2: fr
- tgt_alpha2: ca
- prefer_old: False
- long_pair: fra-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41

Published on the Hugging Face Hub as Helsinki-NLP/opus-mt-fr-ca under the Apache-2.0 license.
] |
translation | transformers |
### opus-mt-fr-ceb
* source languages: fr
* target languages: ceb
* OPUS readme: [fr-ceb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ceb/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ceb/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ceb/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ceb/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ceb | 32.8 | 0.543 |
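These opus-mt checkpoints can be loaded through the 🤗 `transformers` pipeline API. The sketch below shows the general pattern, assuming the hub id scheme used throughout this collection (`Helsinki-NLP/opus-mt-{src}-{tgt}`); the translation helper is only defined, not called, since `pipeline` downloads the checkpoint on first use:

```python
def hub_id(src: str, tgt: str) -> str:
    """Build the hub id for an opus-mt language pair, e.g. hub_id('fr', 'ceb')."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

def translate(texts, src="fr", tgt="ceb"):
    """Translate a list of sentences; downloads the checkpoint on first call."""
    # Third-party dependencies: pip install transformers sentencepiece
    from transformers import pipeline
    translator = pipeline("translation", model=hub_id(src, tgt))
    return [out["translation_text"] for out in translator(texts)]
```

Calling `translate(["Bonjour le monde."])` would fetch the fr-ceb weights and return the Cebuano translation; the import is deferred into the function so the module loads even where `transformers` is not installed.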
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-ceb | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ceb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ceb #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-ceb
* source languages: fr
* target languages: ceb
* OPUS readme: fr-ceb
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 32.8, chr-F: 0.543
| [
"### opus-mt-fr-ceb\n\n\n* source languages: fr\n* target languages: ceb\n* OPUS readme: fr-ceb\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.8, chr-F: 0.543"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ceb #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-ceb\n\n\n* source languages: fr\n* target languages: ceb\n* OPUS readme: fr-ceb\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.8, chr-F: 0.543"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ceb #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-ceb\n\n\n* source languages: fr\n* target languages: ceb\n* OPUS readme: fr-ceb\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.8, chr-F: 0.543"
] |
translation | transformers |
### opus-mt-fr-crs
* source languages: fr
* target languages: crs
* OPUS readme: [fr-crs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-crs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-crs/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-crs/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-crs/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.crs | 31.6 | 0.492 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-crs | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"crs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #crs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-crs
* source languages: fr
* target languages: crs
* OPUS readme: fr-crs
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.6, chr-F: 0.492
| [
"### opus-mt-fr-crs\n\n\n* source languages: fr\n* target languages: crs\n* OPUS readme: fr-crs\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.6, chr-F: 0.492"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #crs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-crs\n\n\n* source languages: fr\n* target languages: crs\n* OPUS readme: fr-crs\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.6, chr-F: 0.492"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #crs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-crs\n\n\n* source languages: fr\n* target languages: crs\n* OPUS readme: fr-crs\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.6, chr-F: 0.492"
] |
translation | transformers |
### opus-mt-fr-de
* source languages: fr
* target languages: de
* OPUS readme: [fr-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-de/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-de/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-de/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| euelections_dev2019.transformer-align.fr | 26.4 | 0.571 |
| newssyscomb2009.fr.de | 22.1 | 0.524 |
| news-test2008.fr.de | 22.1 | 0.524 |
| newstest2009.fr.de | 21.6 | 0.520 |
| newstest2010.fr.de | 22.6 | 0.527 |
| newstest2011.fr.de | 21.5 | 0.518 |
| newstest2012.fr.de | 22.4 | 0.516 |
| newstest2013.fr.de | 24.2 | 0.532 |
| newstest2019-frde.fr.de | 27.9 | 0.595 |
| Tatoeba.fr.de | 49.1 | 0.676 |
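For more control than the pipeline API offers, the Marian classes can be used directly. This is a minimal sketch: the batching helper keeps long input lists within memory, and `num_beams=4` is an illustrative decoding choice, not necessarily the setting used to produce the scores above:

```python
from typing import Iterable, List

def batches(items: List[str], size: int) -> Iterable[List[str]]:
    """Yield fixed-size batches so long sentence lists fit in memory."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def translate_fr_de(sentences: List[str], batch_size: int = 8) -> List[str]:
    """Translate French sentences to German with beam search."""
    # Third-party dependencies: pip install transformers sentencepiece torch
    from transformers import MarianMTModel, MarianTokenizer
    name = "Helsinki-NLP/opus-mt-fr-de"
    tokenizer = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name)
    outputs = []
    for batch in batches(sentences, batch_size):
        inputs = tokenizer(batch, return_tensors="pt", padding=True)
        generated = model.generate(**inputs, num_beams=4)
        outputs.extend(tokenizer.batch_decode(generated, skip_special_tokens=True))
    return outputs
```

`translate_fr_de(["Le chat dort.", "Il pleut aujourd'hui."])` would return the two German translations; the heavy imports are deferred so defining the helpers needs no network access.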
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-de | null | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"fr",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #rust #marian #text2text-generation #translation #fr #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-de
* source languages: fr
* target languages: de
* OPUS readme: fr-de
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: euelections\_dev2019.URL, BLEU: 26.4, chr-F: 0.571
testset: URL, BLEU: 22.1, chr-F: 0.524
testset: URL, BLEU: 22.1, chr-F: 0.524
testset: URL, BLEU: 21.6, chr-F: 0.520
testset: URL, BLEU: 22.6, chr-F: 0.527
testset: URL, BLEU: 21.5, chr-F: 0.518
testset: URL, BLEU: 22.4, chr-F: 0.516
testset: URL, BLEU: 24.2, chr-F: 0.532
testset: URL, BLEU: 27.9, chr-F: 0.595
testset: URL, BLEU: 49.1, chr-F: 0.676
| [
"### opus-mt-fr-de\n\n\n* source languages: fr\n* target languages: de\n* OPUS readme: fr-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: euelections\\_dev2019.URL, BLEU: 26.4, chr-F: 0.571\ntestset: URL, BLEU: 22.1, chr-F: 0.524\ntestset: URL, BLEU: 22.1, chr-F: 0.524\ntestset: URL, BLEU: 21.6, chr-F: 0.520\ntestset: URL, BLEU: 22.6, chr-F: 0.527\ntestset: URL, BLEU: 21.5, chr-F: 0.518\ntestset: URL, BLEU: 22.4, chr-F: 0.516\ntestset: URL, BLEU: 24.2, chr-F: 0.532\ntestset: URL, BLEU: 27.9, chr-F: 0.595\ntestset: URL, BLEU: 49.1, chr-F: 0.676"
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #fr #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-de\n\n\n* source languages: fr\n* target languages: de\n* OPUS readme: fr-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: euelections\\_dev2019.URL, BLEU: 26.4, chr-F: 0.571\ntestset: URL, BLEU: 22.1, chr-F: 0.524\ntestset: URL, BLEU: 22.1, chr-F: 0.524\ntestset: URL, BLEU: 21.6, chr-F: 0.520\ntestset: URL, BLEU: 22.6, chr-F: 0.527\ntestset: URL, BLEU: 21.5, chr-F: 0.518\ntestset: URL, BLEU: 22.4, chr-F: 0.516\ntestset: URL, BLEU: 24.2, chr-F: 0.532\ntestset: URL, BLEU: 27.9, chr-F: 0.595\ntestset: URL, BLEU: 49.1, chr-F: 0.676"
] | [
53,
321
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #fr #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-de\n\n\n* source languages: fr\n* target languages: de\n* OPUS readme: fr-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: euelections\\_dev2019.URL, BLEU: 26.4, chr-F: 0.571\ntestset: URL, BLEU: 22.1, chr-F: 0.524\ntestset: URL, BLEU: 22.1, chr-F: 0.524\ntestset: URL, BLEU: 21.6, chr-F: 0.520\ntestset: URL, BLEU: 22.6, chr-F: 0.527\ntestset: URL, BLEU: 21.5, chr-F: 0.518\ntestset: URL, BLEU: 22.4, chr-F: 0.516\ntestset: URL, BLEU: 24.2, chr-F: 0.532\ntestset: URL, BLEU: 27.9, chr-F: 0.595\ntestset: URL, BLEU: 49.1, chr-F: 0.676"
] |
translation | transformers |
### opus-mt-fr-ee
* source languages: fr
* target languages: ee
* OPUS readme: [fr-ee](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ee/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ee/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ee/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ee/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ee | 26.3 | 0.466 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-ee | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ee",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ee #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-ee
* source languages: fr
* target languages: ee
* OPUS readme: fr-ee
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.3, chr-F: 0.466
| [
"### opus-mt-fr-ee\n\n\n* source languages: fr\n* target languages: ee\n* OPUS readme: fr-ee\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.3, chr-F: 0.466"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ee #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-ee\n\n\n* source languages: fr\n* target languages: ee\n* OPUS readme: fr-ee\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.3, chr-F: 0.466"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ee #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-ee\n\n\n* source languages: fr\n* target languages: ee\n* OPUS readme: fr-ee\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.3, chr-F: 0.466"
] |
translation | transformers |
### opus-mt-fr-efi
* source languages: fr
* target languages: efi
* OPUS readme: [fr-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-efi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-efi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-efi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-efi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.efi | 26.9 | 0.462 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-efi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"efi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #efi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-efi
* source languages: fr
* target languages: efi
* OPUS readme: fr-efi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.9, chr-F: 0.462
| [
"### opus-mt-fr-efi\n\n\n* source languages: fr\n* target languages: efi\n* OPUS readme: fr-efi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.9, chr-F: 0.462"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #efi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-efi\n\n\n* source languages: fr\n* target languages: efi\n* OPUS readme: fr-efi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.9, chr-F: 0.462"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #efi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-efi\n\n\n* source languages: fr\n* target languages: efi\n* OPUS readme: fr-efi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.9, chr-F: 0.462"
] |
translation | transformers |
### opus-mt-fr-el
* source languages: fr
* target languages: el
* OPUS readme: [fr-el](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-el/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-el/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-el/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-el/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.el | 56.2 | 0.719 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-el | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"el",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #el #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-el
* source languages: fr
* target languages: el
* OPUS readme: fr-el
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 56.2, chr-F: 0.719
| [
"### opus-mt-fr-el\n\n\n* source languages: fr\n* target languages: el\n* OPUS readme: fr-el\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.2, chr-F: 0.719"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #el #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-el\n\n\n* source languages: fr\n* target languages: el\n* OPUS readme: fr-el\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.2, chr-F: 0.719"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #el #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-el\n\n\n* source languages: fr\n* target languages: el\n* OPUS readme: fr-el\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.2, chr-F: 0.719"
] |
translation | transformers |
### opus-mt-fr-en
* source languages: fr
* target languages: en
* OPUS readme: [fr-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-en/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-en/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-en/opus-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdiscussdev2015-enfr.fr.en | 33.1 | 0.580 |
| newsdiscusstest2015-enfr.fr.en | 38.7 | 0.614 |
| newssyscomb2009.fr.en | 30.3 | 0.569 |
| news-test2008.fr.en | 26.2 | 0.542 |
| newstest2009.fr.en | 30.2 | 0.570 |
| newstest2010.fr.en | 32.2 | 0.590 |
| newstest2011.fr.en | 33.0 | 0.597 |
| newstest2012.fr.en | 32.8 | 0.591 |
| newstest2013.fr.en | 33.9 | 0.591 |
| newstest2014-fren.fr.en | 37.8 | 0.633 |
| Tatoeba.fr.en | 57.5 | 0.720 |
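The chr-F column in these benchmark tables is a character n-gram F-score. As a rough illustration of how such a score behaves, here is a simplified single-reference version in pure Python; it mirrors the usual defaults (n = 1..6, β = 2, spaces ignored) but is a sketch, not the exact scorer used to produce the tables:

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    """Character n-gram counts, ignoring spaces as chrF does by default."""
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def simple_chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified sentence-level chrF: averaged n-gram precision/recall, F_beta."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        precisions.append(overlap / max(sum(hyp.values()), 1))
        recalls.append(overlap / max(sum(ref.values()), 1))
    p, r = sum(precisions) / max_n, sum(recalls) / max_n
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)
```

An identical hypothesis and reference score 1.0 and fully disjoint strings score 0.0; β = 2 weights recall twice as heavily as precision, which is why partial translations are penalized more for missing reference content than for extra output.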
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-en | null | [
"transformers",
"pytorch",
"tf",
"jax",
"marian",
"text2text-generation",
"translation",
"fr",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #jax #marian #text2text-generation #translation #fr #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-en
* source languages: fr
* target languages: en
* OPUS readme: fr-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.1, chr-F: 0.580
testset: URL, BLEU: 38.7, chr-F: 0.614
testset: URL, BLEU: 30.3, chr-F: 0.569
testset: URL, BLEU: 26.2, chr-F: 0.542
testset: URL, BLEU: 30.2, chr-F: 0.570
testset: URL, BLEU: 32.2, chr-F: 0.590
testset: URL, BLEU: 33.0, chr-F: 0.597
testset: URL, BLEU: 32.8, chr-F: 0.591
testset: URL, BLEU: 33.9, chr-F: 0.591
testset: URL, BLEU: 37.8, chr-F: 0.633
testset: URL, BLEU: 57.5, chr-F: 0.720
| [
"### opus-mt-fr-en\n\n\n* source languages: fr\n* target languages: en\n* OPUS readme: fr-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.1, chr-F: 0.580\ntestset: URL, BLEU: 38.7, chr-F: 0.614\ntestset: URL, BLEU: 30.3, chr-F: 0.569\ntestset: URL, BLEU: 26.2, chr-F: 0.542\ntestset: URL, BLEU: 30.2, chr-F: 0.570\ntestset: URL, BLEU: 32.2, chr-F: 0.590\ntestset: URL, BLEU: 33.0, chr-F: 0.597\ntestset: URL, BLEU: 32.8, chr-F: 0.591\ntestset: URL, BLEU: 33.9, chr-F: 0.591\ntestset: URL, BLEU: 37.8, chr-F: 0.633\ntestset: URL, BLEU: 57.5, chr-F: 0.720"
] | [
"TAGS\n#transformers #pytorch #tf #jax #marian #text2text-generation #translation #fr #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-en\n\n\n* source languages: fr\n* target languages: en\n* OPUS readme: fr-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.1, chr-F: 0.580\ntestset: URL, BLEU: 38.7, chr-F: 0.614\ntestset: URL, BLEU: 30.3, chr-F: 0.569\ntestset: URL, BLEU: 26.2, chr-F: 0.542\ntestset: URL, BLEU: 30.2, chr-F: 0.570\ntestset: URL, BLEU: 32.2, chr-F: 0.590\ntestset: URL, BLEU: 33.0, chr-F: 0.597\ntestset: URL, BLEU: 32.8, chr-F: 0.591\ntestset: URL, BLEU: 33.9, chr-F: 0.591\ntestset: URL, BLEU: 37.8, chr-F: 0.633\ntestset: URL, BLEU: 57.5, chr-F: 0.720"
] | [
53,
332
] | [
"TAGS\n#transformers #pytorch #tf #jax #marian #text2text-generation #translation #fr #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-en\n\n\n* source languages: fr\n* target languages: en\n* OPUS readme: fr-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.1, chr-F: 0.580\ntestset: URL, BLEU: 38.7, chr-F: 0.614\ntestset: URL, BLEU: 30.3, chr-F: 0.569\ntestset: URL, BLEU: 26.2, chr-F: 0.542\ntestset: URL, BLEU: 30.2, chr-F: 0.570\ntestset: URL, BLEU: 32.2, chr-F: 0.590\ntestset: URL, BLEU: 33.0, chr-F: 0.597\ntestset: URL, BLEU: 32.8, chr-F: 0.591\ntestset: URL, BLEU: 33.9, chr-F: 0.591\ntestset: URL, BLEU: 37.8, chr-F: 0.633\ntestset: URL, BLEU: 57.5, chr-F: 0.720"
] |
translation | transformers |
### opus-mt-fr-eo
* source languages: fr
* target languages: eo
* OPUS readme: [fr-eo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-eo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-eo/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-eo/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-eo/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.eo | 52.0 | 0.695 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-eo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-eo
* source languages: fr
* target languages: eo
* OPUS readme: fr-eo
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 52.0, chr-F: 0.695
| [
"### opus-mt-fr-eo\n\n\n* source languages: fr\n* target languages: eo\n* OPUS readme: fr-eo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.0, chr-F: 0.695"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-eo\n\n\n* source languages: fr\n* target languages: eo\n* OPUS readme: fr-eo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.0, chr-F: 0.695"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-eo\n\n\n* source languages: fr\n* target languages: eo\n* OPUS readme: fr-eo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.0, chr-F: 0.695"
] |
translation | transformers |
### opus-mt-fr-es
* source languages: fr
* target languages: es
* OPUS readme: [fr-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-es/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-es/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-es/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.fr.es | 34.3 | 0.601 |
| news-test2008.fr.es | 32.5 | 0.583 |
| newstest2009.fr.es | 31.6 | 0.586 |
| newstest2010.fr.es | 36.5 | 0.616 |
| newstest2011.fr.es | 38.3 | 0.622 |
| newstest2012.fr.es | 38.1 | 0.619 |
| newstest2013.fr.es | 34.0 | 0.587 |
| Tatoeba.fr.es | 53.2 | 0.709 |
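The card above does not show how to run the checkpoint. A minimal usage sketch (not part of the original card): assuming the `transformers` and `sentencepiece` packages are installed, OPUS-MT checkpoints load as MarianMT models, and the Hub id follows the `Helsinki-NLP/opus-mt-{src}-{tgt}` naming scheme. The `opus_model_id` and `translate` helper names are illustrative, not part of the library.

```python
from typing import List


def opus_model_id(src: str, tgt: str) -> str:
    # Helsinki-NLP OPUS-MT checkpoints follow this naming scheme on the Hub.
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"


def translate(sentences: List[str], src: str = "fr", tgt: str = "es") -> List[str]:
    # Requires `pip install transformers sentencepiece` and network access
    # to download the checkpoint weights on first use.
    from transformers import MarianMTModel, MarianTokenizer

    name = opus_model_id(src, tgt)
    tokenizer = MarianTokenizer.from_pretrained(name)
    model = MarianMTModel.from_pretrained(name)
    batch = tokenizer(sentences, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)


print(opus_model_id("fr", "es"))  # → Helsinki-NLP/opus-mt-fr-es
```

The transformers import is deferred into `translate` so the module loads even where the library is absent; calling `translate(["Bonjour le monde"])` downloads the weights and returns the Spanish output.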
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-es
* source languages: fr
* target languages: es
* OPUS readme: fr-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 34.3, chr-F: 0.601
testset: URL, BLEU: 32.5, chr-F: 0.583
testset: URL, BLEU: 31.6, chr-F: 0.586
testset: URL, BLEU: 36.5, chr-F: 0.616
testset: URL, BLEU: 38.3, chr-F: 0.622
testset: URL, BLEU: 38.1, chr-F: 0.619
testset: URL, BLEU: 34.0, chr-F: 0.587
testset: URL, BLEU: 53.2, chr-F: 0.709
| [
"### opus-mt-fr-es\n\n\n* source languages: fr\n* target languages: es\n* OPUS readme: fr-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.3, chr-F: 0.601\ntestset: URL, BLEU: 32.5, chr-F: 0.583\ntestset: URL, BLEU: 31.6, chr-F: 0.586\ntestset: URL, BLEU: 36.5, chr-F: 0.616\ntestset: URL, BLEU: 38.3, chr-F: 0.622\ntestset: URL, BLEU: 38.1, chr-F: 0.619\ntestset: URL, BLEU: 34.0, chr-F: 0.587\ntestset: URL, BLEU: 53.2, chr-F: 0.709"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-es\n\n\n* source languages: fr\n* target languages: es\n* OPUS readme: fr-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.3, chr-F: 0.601\ntestset: URL, BLEU: 32.5, chr-F: 0.583\ntestset: URL, BLEU: 31.6, chr-F: 0.586\ntestset: URL, BLEU: 36.5, chr-F: 0.616\ntestset: URL, BLEU: 38.3, chr-F: 0.622\ntestset: URL, BLEU: 38.1, chr-F: 0.619\ntestset: URL, BLEU: 34.0, chr-F: 0.587\ntestset: URL, BLEU: 53.2, chr-F: 0.709"
] | [
51,
267
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-es\n\n\n* source languages: fr\n* target languages: es\n* OPUS readme: fr-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.3, chr-F: 0.601\ntestset: URL, BLEU: 32.5, chr-F: 0.583\ntestset: URL, BLEU: 31.6, chr-F: 0.586\ntestset: URL, BLEU: 36.5, chr-F: 0.616\ntestset: URL, BLEU: 38.3, chr-F: 0.622\ntestset: URL, BLEU: 38.1, chr-F: 0.619\ntestset: URL, BLEU: 34.0, chr-F: 0.587\ntestset: URL, BLEU: 53.2, chr-F: 0.709"
] |
translation | transformers |
### opus-mt-fr-fj
* source languages: fr
* target languages: fj
* OPUS readme: [fr-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-fj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-fj/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-fj/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-fj/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.fj | 27.4 | 0.487 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-fj | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"fj",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #fj #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-fj
* source languages: fr
* target languages: fj
* OPUS readme: fr-fj
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.4, chr-F: 0.487
| [
"### opus-mt-fr-fj\n\n\n* source languages: fr\n* target languages: fj\n* OPUS readme: fr-fj\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.4, chr-F: 0.487"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #fj #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-fj\n\n\n* source languages: fr\n* target languages: fj\n* OPUS readme: fr-fj\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.4, chr-F: 0.487"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #fj #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-fj\n\n\n* source languages: fr\n* target languages: fj\n* OPUS readme: fr-fj\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.4, chr-F: 0.487"
] |
translation | transformers |
### opus-mt-fr-gaa
* source languages: fr
* target languages: gaa
* OPUS readme: [fr-gaa](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-gaa/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-gaa/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-gaa/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-gaa/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.gaa | 27.8 | 0.473 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-gaa | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"gaa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #gaa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-gaa
* source languages: fr
* target languages: gaa
* OPUS readme: fr-gaa
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.8, chr-F: 0.473
| [
"### opus-mt-fr-gaa\n\n\n* source languages: fr\n* target languages: gaa\n* OPUS readme: fr-gaa\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.473"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #gaa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-gaa\n\n\n* source languages: fr\n* target languages: gaa\n* OPUS readme: fr-gaa\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.473"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #gaa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-gaa\n\n\n* source languages: fr\n* target languages: gaa\n* OPUS readme: fr-gaa\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.473"
] |
translation | transformers |
### opus-mt-fr-gil
* source languages: fr
* target languages: gil
* OPUS readme: [fr-gil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-gil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-gil/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-gil/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-gil/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.gil | 27.9 | 0.499 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-gil | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"gil",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #gil #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-gil
* source languages: fr
* target languages: gil
* OPUS readme: fr-gil
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.9, chr-F: 0.499
| [
"### opus-mt-fr-gil\n\n\n* source languages: fr\n* target languages: gil\n* OPUS readme: fr-gil\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.499"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #gil #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-gil\n\n\n* source languages: fr\n* target languages: gil\n* OPUS readme: fr-gil\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.499"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #gil #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-gil\n\n\n* source languages: fr\n* target languages: gil\n* OPUS readme: fr-gil\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.499"
] |
translation | transformers |
### opus-mt-fr-guw
* source languages: fr
* target languages: guw
* OPUS readme: [fr-guw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-guw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-guw/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-guw/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-guw/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.guw | 31.4 | 0.505 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-guw | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"guw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #guw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-guw
* source languages: fr
* target languages: guw
* OPUS readme: fr-guw
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.4, chr-F: 0.505
| [
"### opus-mt-fr-guw\n\n\n* source languages: fr\n* target languages: guw\n* OPUS readme: fr-guw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.4, chr-F: 0.505"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #guw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-guw\n\n\n* source languages: fr\n* target languages: guw\n* OPUS readme: fr-guw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.4, chr-F: 0.505"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #guw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-guw\n\n\n* source languages: fr\n* target languages: guw\n* OPUS readme: fr-guw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.4, chr-F: 0.505"
] |
translation | transformers |
### opus-mt-fr-ha
* source languages: fr
* target languages: ha
* OPUS readme: [fr-ha](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ha/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ha/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ha/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ha/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ha | 24.4 | 0.447 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-ha | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ha",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ha #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-ha
* source languages: fr
* target languages: ha
* OPUS readme: fr-ha
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.4, chr-F: 0.447
| [
"### opus-mt-fr-ha\n\n\n* source languages: fr\n* target languages: ha\n* OPUS readme: fr-ha\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.4, chr-F: 0.447"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ha #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-ha\n\n\n* source languages: fr\n* target languages: ha\n* OPUS readme: fr-ha\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.4, chr-F: 0.447"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ha #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-ha\n\n\n* source languages: fr\n* target languages: ha\n* OPUS readme: fr-ha\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.4, chr-F: 0.447"
] |
translation | transformers | ### fr-he
* source group: French
* target group: Hebrew
* OPUS readme: [fra-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-heb/README.md)
* model: transformer
* source language(s): fra
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.heb | 39.2 | 0.598 |
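As a sketch of how this checkpoint could be consumed (not part of the original card): the high-level `pipeline` API in `transformers` wraps tokenizer and model loading in one call. The `load_translator` helper name is illustrative; the scores dict simply restates the Tatoeba-test figures reported above.

```python
MODEL_ID = "Helsinki-NLP/opus-mt-fr-he"

# Reported test-set scores from the card (Tatoeba-test.fra.heb).
SCORES = {"bleu": 39.2, "chrF2": 0.598}


def load_translator(model_id: str = MODEL_ID):
    """Build a translation pipeline for this checkpoint.

    Requires `transformers` and `sentencepiece`; weights are downloaded
    from the Hub on first use, so the import is deferred until needed.
    """
    from transformers import pipeline

    return pipeline("translation", model=model_id)


print(MODEL_ID, SCORES["bleu"])  # → Helsinki-NLP/opus-mt-fr-he 39.2
```

`load_translator()("Bonjour le monde")` would then return the Hebrew translation as a list of dicts, per the standard pipeline output format.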
### System Info:
- hf_name: fr-he
- source_languages: fra
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'he']
- src_constituents: ('French', {'fra'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: fra-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-heb/opus-2020-12-10.test.txt
- src_alpha3: fra
- tgt_alpha3: heb
- chrF2_score: 0.598
- bleu: 39.2
- brevity_penalty: 1.0
- ref_len: 20655.0
- src_name: French
- tgt_name: Hebrew
- train_date: 2020-12-10 00:00:00
- src_alpha2: fr
- tgt_alpha2: he
- prefer_old: False
- short_pair: fr-he
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-16:02 | {"language": ["fr", "he"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-he | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fr",
"he"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### fr-he
* source group: French
* target group: Hebrew
* OPUS readme: fra-heb
* model: transformer
* source language(s): fra
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 39.2, chr-F: 0.598
### System Info:
* hf\_name: fr-he
* source\_languages: fra
* target\_languages: heb
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['fr', 'he']
* src\_constituents: ('French', {'fra'})
* tgt\_constituents: ('Hebrew', {'heb'})
* src\_multilingual: False
* tgt\_multilingual: False
* long\_pair: fra-heb
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: fra
* tgt\_alpha3: heb
* chrF2\_score: 0.598
* bleu: 39.2
* brevity\_penalty: 1.0
* ref\_len: 20655.0
* src\_name: French
* tgt\_name: Hebrew
* train\_date: 2020-12-10 00:00:00
* src\_alpha2: fr
* tgt\_alpha2: he
* prefer\_old: False
* short\_pair: fr-he
* helsinki\_git\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
* transformers\_git\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
* port\_machine: URL
* port\_time: 2020-12-11-16:02
| [
"### fr-he\n\n\n* source group: French\n* target group: Hebrew\n* OPUS readme: fra-heb\n* model: transformer\n* source language(s): fra\n* target language(s): heb\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.2, chr-F: 0.598",
"### System Info:\n\n\n* hf\\_name: fr-he\n* source\\_languages: fra\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'he']\n* src\\_constituents: ('French', {'fra'})\n* tgt\\_constituents: ('Hebrew', {'heb'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: fra-heb\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: heb\n* chrF2\\_score: 0.598\n* bleu: 39.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 20655.0\n* src\\_name: French\n* tgt\\_name: Hebrew\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: fr\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* short\\_pair: fr-he\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-16:02"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### fr-he\n\n\n* source group: French\n* target group: Hebrew\n* OPUS readme: fra-heb\n* model: transformer\n* source language(s): fra\n* target language(s): heb\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.2, chr-F: 0.598",
"### System Info:\n\n\n* hf\\_name: fr-he\n* source\\_languages: fra\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'he']\n* src\\_constituents: ('French', {'fra'})\n* tgt\\_constituents: ('Hebrew', {'heb'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: fra-heb\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: heb\n* chrF2\\_score: 0.598\n* bleu: 39.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 20655.0\n* src\\_name: French\n* tgt\\_name: Hebrew\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: fr\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* short\\_pair: fr-he\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-16:02"
] | [
51,
129,
413
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### fr-he\n\n\n* source group: French\n* target group: Hebrew\n* OPUS readme: fra-heb\n* model: transformer\n* source language(s): fra\n* target language(s): heb\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.2, chr-F: 0.598### System Info:\n\n\n* hf\\_name: fr-he\n* source\\_languages: fra\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'he']\n* src\\_constituents: ('French', {'fra'})\n* tgt\\_constituents: ('Hebrew', {'heb'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: fra-heb\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: heb\n* chrF2\\_score: 0.598\n* bleu: 39.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 20655.0\n* src\\_name: French\n* tgt\\_name: Hebrew\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: fr\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* short\\_pair: fr-he\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-16:02"
] |
translation | transformers |
### opus-mt-fr-hil
* source languages: fr
* target languages: hil
* OPUS readme: [fr-hil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-hil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-hil/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hil/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hil/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.hil | 34.7 | 0.559 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-hil | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"hil",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #hil #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-hil
* source languages: fr
* target languages: hil
* OPUS readme: fr-hil
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 34.7, chr-F: 0.559
| [
"### opus-mt-fr-hil\n\n\n* source languages: fr\n* target languages: hil\n* OPUS readme: fr-hil\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.7, chr-F: 0.559"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #hil #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-hil\n\n\n* source languages: fr\n* target languages: hil\n* OPUS readme: fr-hil\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.7, chr-F: 0.559"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #hil #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-hil\n\n\n* source languages: fr\n* target languages: hil\n* OPUS readme: fr-hil\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.7, chr-F: 0.559"
] |
translation | transformers |
### opus-mt-fr-ho
* source languages: fr
* target languages: ho
* OPUS readme: [fr-ho](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ho/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ho/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ho/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ho/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ho | 25.4 | 0.480 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-ho | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ho",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ho #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-ho
* source languages: fr
* target languages: ho
* OPUS readme: fr-ho
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.4, chr-F: 0.480
| [
"### opus-mt-fr-ho\n\n\n* source languages: fr\n* target languages: ho\n* OPUS readme: fr-ho\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.4, chr-F: 0.480"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ho #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-ho\n\n\n* source languages: fr\n* target languages: ho\n* OPUS readme: fr-ho\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.4, chr-F: 0.480"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ho #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-ho\n\n\n* source languages: fr\n* target languages: ho\n* OPUS readme: fr-ho\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.4, chr-F: 0.480"
] |
translation | transformers |
### opus-mt-fr-hr
* source languages: fr
* target languages: hr
* OPUS readme: [fr-hr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-hr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-hr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.hr | 20.7 | 0.442 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-hr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"hr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #hr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-hr
* source languages: fr
* target languages: hr
* OPUS readme: fr-hr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 20.7, chr-F: 0.442
| [
"### opus-mt-fr-hr\n\n\n* source languages: fr\n* target languages: hr\n* OPUS readme: fr-hr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.7, chr-F: 0.442"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #hr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-hr\n\n\n* source languages: fr\n* target languages: hr\n* OPUS readme: fr-hr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.7, chr-F: 0.442"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #hr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-hr\n\n\n* source languages: fr\n* target languages: hr\n* OPUS readme: fr-hr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.7, chr-F: 0.442"
] |
translation | transformers |
### opus-mt-fr-ht
* source languages: fr
* target languages: ht
* OPUS readme: [fr-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ht/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ht | 29.2 | 0.461 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-ht | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ht",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ht #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-ht
* source languages: fr
* target languages: ht
* OPUS readme: fr-ht
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 29.2, chr-F: 0.461
| [
"### opus-mt-fr-ht\n\n\n* source languages: fr\n* target languages: ht\n* OPUS readme: fr-ht\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.2, chr-F: 0.461"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ht #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-ht\n\n\n* source languages: fr\n* target languages: ht\n* OPUS readme: fr-ht\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.2, chr-F: 0.461"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ht #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-ht\n\n\n* source languages: fr\n* target languages: ht\n* OPUS readme: fr-ht\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.2, chr-F: 0.461"
] |
translation | transformers |
### opus-mt-fr-hu
* source languages: fr
* target languages: hu
* OPUS readme: [fr-hu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-hu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-hu/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hu/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hu/opus-2020-01-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.hu | 41.3 | 0.629 |
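For quick experiments, the higher-level `pipeline` API wraps the same checkpoint in one call. A sketch — the wrapper function is illustrative, and constructing the pipeline downloads the model weights:

```python
def opus_mt_pipeline(src="fr", tgt="hu"):
    # Imported lazily; building the pipeline fetches the checkpoint on first use.
    from transformers import pipeline

    return pipeline("translation", model=f"Helsinki-NLP/opus-mt-{src}-{tgt}")

if __name__ == "__main__":
    translator = opus_mt_pipeline()
    print(translator("Bonjour, comment allez-vous ?")[0]["translation_text"])
```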
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-hu | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"hu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-hu
* source languages: fr
* target languages: hu
* OPUS readme: fr-hu
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 41.3, chr-F: 0.629
| [
"### opus-mt-fr-hu\n\n\n* source languages: fr\n* target languages: hu\n* OPUS readme: fr-hu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.3, chr-F: 0.629"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-hu\n\n\n* source languages: fr\n* target languages: hu\n* OPUS readme: fr-hu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.3, chr-F: 0.629"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-hu\n\n\n* source languages: fr\n* target languages: hu\n* OPUS readme: fr-hu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.3, chr-F: 0.629"
] |
translation | transformers |
### opus-mt-fr-id
* source languages: fr
* target languages: id
* OPUS readme: [fr-id](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-id/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-id/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-id/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-id/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.id | 37.2 | 0.636 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-id | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"id",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #id #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-id
* source languages: fr
* target languages: id
* OPUS readme: fr-id
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.2, chr-F: 0.636
| [
"### opus-mt-fr-id\n\n\n* source languages: fr\n* target languages: id\n* OPUS readme: fr-id\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.2, chr-F: 0.636"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #id #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-id\n\n\n* source languages: fr\n* target languages: id\n* OPUS readme: fr-id\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.2, chr-F: 0.636"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #id #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-id\n\n\n* source languages: fr\n* target languages: id\n* OPUS readme: fr-id\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.2, chr-F: 0.636"
] |
translation | transformers |
### opus-mt-fr-ig
* source languages: fr
* target languages: ig
* OPUS readme: [fr-ig](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ig/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ig/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ig/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ig/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ig | 29.0 | 0.445 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-ig | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ig",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-ig
* source languages: fr
* target languages: ig
* OPUS readme: fr-ig
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 29.0, chr-F: 0.445
| [
"### opus-mt-fr-ig\n\n\n* source languages: fr\n* target languages: ig\n* OPUS readme: fr-ig\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.0, chr-F: 0.445"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-ig\n\n\n* source languages: fr\n* target languages: ig\n* OPUS readme: fr-ig\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.0, chr-F: 0.445"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ig #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-ig\n\n\n* source languages: fr\n* target languages: ig\n* OPUS readme: fr-ig\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.0, chr-F: 0.445"
] |
translation | transformers |
### opus-mt-fr-ilo
* source languages: fr
* target languages: ilo
* OPUS readme: [fr-ilo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ilo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ilo/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ilo/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ilo/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ilo | 30.6 | 0.528 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-ilo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ilo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ilo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-ilo
* source languages: fr
* target languages: ilo
* OPUS readme: fr-ilo
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.6, chr-F: 0.528
| [
"### opus-mt-fr-ilo\n\n\n* source languages: fr\n* target languages: ilo\n* OPUS readme: fr-ilo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.6, chr-F: 0.528"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ilo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-ilo\n\n\n* source languages: fr\n* target languages: ilo\n* OPUS readme: fr-ilo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.6, chr-F: 0.528"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ilo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-ilo\n\n\n* source languages: fr\n* target languages: ilo\n* OPUS readme: fr-ilo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.6, chr-F: 0.528"
] |
translation | transformers |
### opus-mt-fr-iso
* source languages: fr
* target languages: iso
* OPUS readme: [fr-iso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-iso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-iso/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-iso/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-iso/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.iso | 26.7 | 0.429 |
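The chr-F column in these benchmark tables is a character n-gram F-score. A simplified, dependency-free sketch of the metric with common default-style parameters (n = 1…6, β = 2, whitespace ignored); sacrebleu's reference implementation differs in details such as tokenization and corpus-level averaging:

```python
from collections import Counter

def char_ngrams(text, n):
    # Whitespace is ignored, as in the common chrF default.
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        precisions.append(overlap / max(sum(hyp.values()), 1))
        recalls.append(overlap / max(sum(ref.values()), 1))
    p = sum(precisions) / max_n
    r = sum(recalls) / max_n
    if p + r == 0.0:
        return 0.0
    # F-beta with beta = 2 weights recall more heavily than precision.
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

An identical hypothesis and reference (long enough to contain all n-gram orders) scores 1.0, while unrelated strings score close to 0.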
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-iso | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"iso",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #iso #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-iso
* source languages: fr
* target languages: iso
* OPUS readme: fr-iso
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.7, chr-F: 0.429
| [
"### opus-mt-fr-iso\n\n\n* source languages: fr\n* target languages: iso\n* OPUS readme: fr-iso\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.429"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #iso #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-iso\n\n\n* source languages: fr\n* target languages: iso\n* OPUS readme: fr-iso\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.429"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #iso #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-iso\n\n\n* source languages: fr\n* target languages: iso\n* OPUS readme: fr-iso\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.429"
] |
translation | transformers |
### opus-mt-fr-kg
* source languages: fr
* target languages: kg
* OPUS readme: [fr-kg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-kg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-kg/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kg/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kg/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.kg | 30.4 | 0.523 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-kg | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"kg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #kg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-kg
* source languages: fr
* target languages: kg
* OPUS readme: fr-kg
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.4, chr-F: 0.523
| [
"### opus-mt-fr-kg\n\n\n* source languages: fr\n* target languages: kg\n* OPUS readme: fr-kg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.523"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #kg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-kg\n\n\n* source languages: fr\n* target languages: kg\n* OPUS readme: fr-kg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.523"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #kg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-kg\n\n\n* source languages: fr\n* target languages: kg\n* OPUS readme: fr-kg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.523"
] |
translation | transformers |
### opus-mt-fr-kqn
* source languages: fr
* target languages: kqn
* OPUS readme: [fr-kqn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-kqn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-kqn/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kqn/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kqn/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.kqn | 23.3 | 0.469 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-kqn | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"kqn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #kqn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-kqn
* source languages: fr
* target languages: kqn
* OPUS readme: fr-kqn
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.3, chr-F: 0.469
| [
"### opus-mt-fr-kqn\n\n\n* source languages: fr\n* target languages: kqn\n* OPUS readme: fr-kqn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.469"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #kqn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-kqn\n\n\n* source languages: fr\n* target languages: kqn\n* OPUS readme: fr-kqn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.469"
] | [
53,
112
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #kqn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-kqn\n\n\n* source languages: fr\n* target languages: kqn\n* OPUS readme: fr-kqn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.469"
] |
translation | transformers |
### opus-mt-fr-kwy
* source languages: fr
* target languages: kwy
* OPUS readme: [fr-kwy](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-kwy/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-kwy/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kwy/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-kwy/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.kwy | 22.5 | 0.428 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-kwy | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"kwy",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #kwy #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-kwy
* source languages: fr
* target languages: kwy
* OPUS readme: fr-kwy
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.5, chr-F: 0.428
| [
"### opus-mt-fr-kwy\n\n\n* source languages: fr\n* target languages: kwy\n* OPUS readme: fr-kwy\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.5, chr-F: 0.428"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #kwy #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-kwy\n\n\n* source languages: fr\n* target languages: kwy\n* OPUS readme: fr-kwy\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.5, chr-F: 0.428"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #kwy #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-kwy\n\n\n* source languages: fr\n* target languages: kwy\n* OPUS readme: fr-kwy\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.5, chr-F: 0.428"
] |
translation | transformers |
### opus-mt-fr-lg
* source languages: fr
* target languages: lg
* OPUS readme: [fr-lg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lg/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.lg | 21.7 | 0.454 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-lg | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"lg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #lg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-lg
* source languages: fr
* target languages: lg
* OPUS readme: fr-lg
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.7, chr-F: 0.454
| [
"### opus-mt-fr-lg\n\n\n* source languages: fr\n* target languages: lg\n* OPUS readme: fr-lg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.7, chr-F: 0.454"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #lg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-lg\n\n\n* source languages: fr\n* target languages: lg\n* OPUS readme: fr-lg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.7, chr-F: 0.454"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #lg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-lg\n\n\n* source languages: fr\n* target languages: lg\n* OPUS readme: fr-lg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.7, chr-F: 0.454"
] |
translation | transformers |
### opus-mt-fr-ln
* source languages: fr
* target languages: ln
* OPUS readme: [fr-ln](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ln/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ln/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ln/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ln/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ln | 30.5 | 0.527 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-ln | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ln",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ln #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-ln
* source languages: fr
* target languages: ln
* OPUS readme: fr-ln
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.5, chr-F: 0.527
| [
"### opus-mt-fr-ln\n\n\n* source languages: fr\n* target languages: ln\n* OPUS readme: fr-ln\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.5, chr-F: 0.527"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ln #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-ln\n\n\n* source languages: fr\n* target languages: ln\n* OPUS readme: fr-ln\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.5, chr-F: 0.527"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ln #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-ln\n\n\n* source languages: fr\n* target languages: ln\n* OPUS readme: fr-ln\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.5, chr-F: 0.527"
] |
translation | transformers |
### opus-mt-fr-loz
* source languages: fr
* target languages: loz
* OPUS readme: [fr-loz](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-loz/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-loz/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-loz/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-loz/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.loz | 30.0 | 0.498 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-loz | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"loz",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #loz #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-loz
* source languages: fr
* target languages: loz
* OPUS readme: fr-loz
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.0, chr-F: 0.498
| [
"### opus-mt-fr-loz\n\n\n* source languages: fr\n* target languages: loz\n* OPUS readme: fr-loz\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.0, chr-F: 0.498"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #loz #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-loz\n\n\n* source languages: fr\n* target languages: loz\n* OPUS readme: fr-loz\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.0, chr-F: 0.498"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #loz #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-loz\n\n\n* source languages: fr\n* target languages: loz\n* OPUS readme: fr-loz\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.0, chr-F: 0.498"
] |
translation | transformers |
### opus-mt-fr-lu
* source languages: fr
* target languages: lu
* OPUS readme: [fr-lu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lu/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lu/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lu/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.lu | 25.5 | 0.471 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-lu | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"lu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #lu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-lu
* source languages: fr
* target languages: lu
* OPUS readme: fr-lu
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.5, chr-F: 0.471
| [
"### opus-mt-fr-lu\n\n\n* source languages: fr\n* target languages: lu\n* OPUS readme: fr-lu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.5, chr-F: 0.471"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #lu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-lu\n\n\n* source languages: fr\n* target languages: lu\n* OPUS readme: fr-lu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.5, chr-F: 0.471"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #lu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-lu\n\n\n* source languages: fr\n* target languages: lu\n* OPUS readme: fr-lu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.5, chr-F: 0.471"
] |
translation | transformers |
### opus-mt-fr-lua
* source languages: fr
* target languages: lua
* OPUS readme: [fr-lua](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lua/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lua/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lua/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lua/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.lua | 27.3 | 0.496 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-lua | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"lua",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #lua #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-lua
* source languages: fr
* target languages: lua
* OPUS readme: fr-lua
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.3, chr-F: 0.496
| [
"### opus-mt-fr-lua\n\n\n* source languages: fr\n* target languages: lua\n* OPUS readme: fr-lua\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.3, chr-F: 0.496"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #lua #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-lua\n\n\n* source languages: fr\n* target languages: lua\n* OPUS readme: fr-lua\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.3, chr-F: 0.496"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #lua #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-lua\n\n\n* source languages: fr\n* target languages: lua\n* OPUS readme: fr-lua\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.3, chr-F: 0.496"
] |
translation | transformers |
### opus-mt-fr-lue
* source languages: fr
* target languages: lue
* OPUS readme: [fr-lue](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lue/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lue/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lue/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lue/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.lue | 23.1 | 0.485 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-lue | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"lue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #lue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-lue
* source languages: fr
* target languages: lue
* OPUS readme: fr-lue
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.1, chr-F: 0.485
| [
"### opus-mt-fr-lue\n\n\n* source languages: fr\n* target languages: lue\n* OPUS readme: fr-lue\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.1, chr-F: 0.485"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #lue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-lue\n\n\n* source languages: fr\n* target languages: lue\n* OPUS readme: fr-lue\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.1, chr-F: 0.485"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #lue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-lue\n\n\n* source languages: fr\n* target languages: lue\n* OPUS readme: fr-lue\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.1, chr-F: 0.485"
] |
translation | transformers |
### opus-mt-fr-lus
* source languages: fr
* target languages: lus
* OPUS readme: [fr-lus](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-lus/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-lus/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lus/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-lus/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.lus | 25.5 | 0.455 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-lus | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"lus",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #lus #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-lus
* source languages: fr
* target languages: lus
* OPUS readme: fr-lus
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.5, chr-F: 0.455
| [
"### opus-mt-fr-lus\n\n\n* source languages: fr\n* target languages: lus\n* OPUS readme: fr-lus\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.5, chr-F: 0.455"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #lus #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-lus\n\n\n* source languages: fr\n* target languages: lus\n* OPUS readme: fr-lus\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.5, chr-F: 0.455"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #lus #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-lus\n\n\n* source languages: fr\n* target languages: lus\n* OPUS readme: fr-lus\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.5, chr-F: 0.455"
] |
translation | transformers |
### opus-mt-fr-mfe
* source languages: fr
* target languages: mfe
* OPUS readme: [fr-mfe](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-mfe/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mfe/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.mfe | 26.1 | 0.451 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-mfe | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"mfe",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #mfe #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-mfe
* source languages: fr
* target languages: mfe
* OPUS readme: fr-mfe
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.1, chr-F: 0.451
| [
"### opus-mt-fr-mfe\n\n\n* source languages: fr\n* target languages: mfe\n* OPUS readme: fr-mfe\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.451"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #mfe #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-mfe\n\n\n* source languages: fr\n* target languages: mfe\n* OPUS readme: fr-mfe\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.451"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #mfe #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-mfe\n\n\n* source languages: fr\n* target languages: mfe\n* OPUS readme: fr-mfe\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.451"
] |
translation | transformers |
### opus-mt-fr-mh
* source languages: fr
* target languages: mh
* OPUS readme: [fr-mh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-mh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-mh/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mh/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mh/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.mh | 21.7 | 0.399 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-mh | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"mh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #mh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-mh
* source languages: fr
* target languages: mh
* OPUS readme: fr-mh
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.7, chr-F: 0.399
| [
"### opus-mt-fr-mh\n\n\n* source languages: fr\n* target languages: mh\n* OPUS readme: fr-mh\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.7, chr-F: 0.399"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #mh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-mh\n\n\n* source languages: fr\n* target languages: mh\n* OPUS readme: fr-mh\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.7, chr-F: 0.399"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #mh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-mh\n\n\n* source languages: fr\n* target languages: mh\n* OPUS readme: fr-mh\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.7, chr-F: 0.399"
] |
translation | transformers |
### opus-mt-fr-mos
* source languages: fr
* target languages: mos
* OPUS readme: [fr-mos](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-mos/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-mos/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mos/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mos/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.mos | 21.1 | 0.353 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-mos | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"mos",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #mos #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-mos
* source languages: fr
* target languages: mos
* OPUS readme: fr-mos
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.1, chr-F: 0.353
| [
"### opus-mt-fr-mos\n\n\n* source languages: fr\n* target languages: mos\n* OPUS readme: fr-mos\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.1, chr-F: 0.353"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #mos #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-mos\n\n\n* source languages: fr\n* target languages: mos\n* OPUS readme: fr-mos\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.1, chr-F: 0.353"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #mos #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-mos\n\n\n* source languages: fr\n* target languages: mos\n* OPUS readme: fr-mos\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.1, chr-F: 0.353"
] |
translation | transformers |
### fra-msa
* source group: French
* target group: Malay (macrolanguage)
* OPUS readme: [fra-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-msa/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): ind zsm_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.msa | 35.3 | 0.617 |
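Because this model serves two target languages (ind, zsm_Latn), the input must carry the sentence-initial token described above. The sketch below shows how such a token can be prepended; the `with_target_token` helper is illustrative (not part of any library), and the commented-out part assumes the `transformers` library and a download of the `Helsinki-NLP/opus-mt-fr-ms` checkpoint:

```python
def with_target_token(text: str, lang_id: str) -> str:
    """Prepend the sentence-initial target-language token required by
    multi-target Marian models, e.g. 'ind' or 'zsm_Latn' for fra-msa."""
    return f">>{lang_id}<< {text}"

# Build a source sentence targeting Indonesian:
src = with_target_token("Bonjour le monde.", "ind")
# src == ">>ind<< Bonjour le monde."

# Translating with the transformers library (requires network access
# to fetch the checkpoint, so it is only sketched here):
# from transformers import MarianMTModel, MarianTokenizer
# name = "Helsinki-NLP/opus-mt-fr-ms"
# tokenizer = MarianTokenizer.from_pretrained(name)
# model = MarianMTModel.from_pretrained(name)
# batch = tokenizer([src], return_tensors="pt", padding=True)
# generated = model.generate(**batch)
# print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

Omitting the token (or using an ID outside the card's target set) typically degrades output quality, since the model cannot tell which target language is wanted.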
### System Info:
- hf_name: fra-msa
- source_languages: fra
- target_languages: msa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-msa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'ms']
- src_constituents: {'fra'}
- tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-msa/opus-2020-06-17.test.txt
- src_alpha3: fra
- tgt_alpha3: msa
- short_pair: fr-ms
- chrF2_score: 0.617
- bleu: 35.3
- brevity_penalty: 0.978
- ref_len: 6696.0
- src_name: French
- tgt_name: Malay (macrolanguage)
- train_date: 2020-06-17
- src_alpha2: fr
- tgt_alpha2: ms
- prefer_old: False
- long_pair: fra-msa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["fr", "ms"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-ms | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ms",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fr",
"ms"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ms #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### fra-msa
* source group: French
* target group: Malay (macrolanguage)
* OPUS readme: fra-msa
* model: transformer-align
* source language(s): fra
* target language(s): ind zsm\_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 35.3, chr-F: 0.617
### System Info:
* hf\_name: fra-msa
* source\_languages: fra
* target\_languages: msa
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['fr', 'ms']
* src\_constituents: {'fra'}
* tgt\_constituents: {'zsm\_Latn', 'ind', 'max\_Latn', 'zlm\_Latn', 'min'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: fra
* tgt\_alpha3: msa
* short\_pair: fr-ms
* chrF2\_score: 0.617
* bleu: 35.3
* brevity\_penalty: 0.978
* ref\_len: 6696.0
* src\_name: French
* tgt\_name: Malay (macrolanguage)
* train\_date: 2020-06-17
* src\_alpha2: fr
* tgt\_alpha2: ms
* prefer\_old: False
* long\_pair: fra-msa
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### fra-msa\n\n\n* source group: French\n* target group: Malay (macrolanguage)\n* OPUS readme: fra-msa\n* model: transformer-align\n* source language(s): fra\n* target language(s): ind zsm\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.3, chr-F: 0.617",
"### System Info:\n\n\n* hf\\_name: fra-msa\n* source\\_languages: fra\n* target\\_languages: msa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'ms']\n* src\\_constituents: {'fra'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: msa\n* short\\_pair: fr-ms\n* chrF2\\_score: 0.617\n* bleu: 35.3\n* brevity\\_penalty: 0.978\n* ref\\_len: 6696.0\n* src\\_name: French\n* tgt\\_name: Malay (macrolanguage)\n* train\\_date: 2020-06-17\n* src\\_alpha2: fr\n* tgt\\_alpha2: ms\n* prefer\\_old: False\n* long\\_pair: fra-msa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ms #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### fra-msa\n\n\n* source group: French\n* target group: Malay (macrolanguage)\n* OPUS readme: fra-msa\n* model: transformer-align\n* source language(s): fra\n* target language(s): ind zsm\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.3, chr-F: 0.617",
"### System Info:\n\n\n* hf\\_name: fra-msa\n* source\\_languages: fra\n* target\\_languages: msa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'ms']\n* src\\_constituents: {'fra'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: msa\n* short\\_pair: fr-ms\n* chrF2\\_score: 0.617\n* bleu: 35.3\n* brevity\\_penalty: 0.978\n* ref\\_len: 6696.0\n* src\\_name: French\n* tgt\\_name: Malay (macrolanguage)\n* train\\_date: 2020-06-17\n* src\\_alpha2: fr\n* tgt\\_alpha2: ms\n* prefer\\_old: False\n* long\\_pair: fra-msa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
173,
434
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ms #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### fra-msa\n\n\n* source group: French\n* target group: Malay (macrolanguage)\n* OPUS readme: fra-msa\n* model: transformer-align\n* source language(s): fra\n* target language(s): ind zsm\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.3, chr-F: 0.617### System Info:\n\n\n* hf\\_name: fra-msa\n* source\\_languages: fra\n* target\\_languages: msa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'ms']\n* src\\_constituents: {'fra'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: msa\n* short\\_pair: fr-ms\n* chrF2\\_score: 0.617\n* bleu: 35.3\n* brevity\\_penalty: 0.978\n* ref\\_len: 6696.0\n* src\\_name: French\n* tgt\\_name: Malay (macrolanguage)\n* train\\_date: 2020-06-17\n* src\\_alpha2: fr\n* tgt\\_alpha2: ms\n* prefer\\_old: False\n* long\\_pair: fra-msa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-fr-mt
* source languages: fr
* target languages: mt
* OPUS readme: [fr-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-mt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-mt/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mt/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-mt/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.mt | 28.7 | 0.466 |
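The BLEU figures in these tables are corpus-level scores computed on the linked test sets. As a rough illustration of what BLEU measures, here is a minimal single-pair sketch (simplified smoothing; not the exact scorer used to produce the numbers above):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # Multiset of n-grams in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis: str, reference: str, max_n: int = 4) -> float:
    """Toy BLEU on one hypothesis/reference pair (illustrative only)."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_ngrams & ref_ngrams).values())   # clipped matches
        total = max(sum(hyp_ngrams.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)       # avoid log(0)
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))  # brevity penalty
    return 100 * bp * geo_mean

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 1))  # -> 100.0
```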
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-mt | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"mt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #mt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-mt
* source languages: fr
* target languages: mt
* OPUS readme: fr-mt
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 28.7, chr-F: 0.466
| [
"### opus-mt-fr-mt\n\n\n* source languages: fr\n* target languages: mt\n* OPUS readme: fr-mt\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.7, chr-F: 0.466"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #mt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-mt\n\n\n* source languages: fr\n* target languages: mt\n* OPUS readme: fr-mt\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.7, chr-F: 0.466"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #mt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-mt\n\n\n* source languages: fr\n* target languages: mt\n* OPUS readme: fr-mt\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.7, chr-F: 0.466"
] |
translation | transformers |
### opus-mt-fr-niu
* source languages: fr
* target languages: niu
* OPUS readme: [fr-niu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-niu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-niu/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-niu/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-niu/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.niu | 34.5 | 0.537 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-niu | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"niu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #niu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-niu
* source languages: fr
* target languages: niu
* OPUS readme: fr-niu
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 34.5, chr-F: 0.537
| [
"### opus-mt-fr-niu\n\n\n* source languages: fr\n* target languages: niu\n* OPUS readme: fr-niu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.5, chr-F: 0.537"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #niu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-niu\n\n\n* source languages: fr\n* target languages: niu\n* OPUS readme: fr-niu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.5, chr-F: 0.537"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #niu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-niu\n\n\n* source languages: fr\n* target languages: niu\n* OPUS readme: fr-niu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.5, chr-F: 0.537"
] |
translation | transformers |
### fra-nor
* source group: French
* target group: Norwegian
* OPUS readme: [fra-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-nor/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): nno nob
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.nor | 36.1 | 0.555 |
### System Info:
- hf_name: fra-nor
- source_languages: fra
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'no']
- src_constituents: {'fra'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-nor/opus-2020-06-17.test.txt
- src_alpha3: fra
- tgt_alpha3: nor
- short_pair: fr-no
- chrF2_score: 0.555
- bleu: 36.1
- brevity_penalty: 0.981
- ref_len: 3089.0
- src_name: French
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: fr
- tgt_alpha2: no
- prefer_old: False
- long_pair: fra-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["fr", "no"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-no | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"no",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fr",
"no"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### fra-nor
* source group: French
* target group: Norwegian
* OPUS readme: fra-nor
* model: transformer-align
* source language(s): fra
* target language(s): nno nob
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 36.1, chr-F: 0.555
### System Info:
* hf\_name: fra-nor
* source\_languages: fra
* target\_languages: nor
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['fr', 'no']
* src\_constituents: {'fra'}
* tgt\_constituents: {'nob', 'nno'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: fra
* tgt\_alpha3: nor
* short\_pair: fr-no
* chrF2\_score: 0.555
* bleu: 36.1
* brevity\_penalty: 0.981
* ref\_len: 3089.0
* src\_name: French
* tgt\_name: Norwegian
* train\_date: 2020-06-17
* src\_alpha2: fr
* tgt\_alpha2: no
* prefer\_old: False
* long\_pair: fra-nor
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### fra-nor\n\n\n* source group: French\n* target group: Norwegian\n* OPUS readme: fra-nor\n* model: transformer-align\n* source language(s): fra\n* target language(s): nno nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.1, chr-F: 0.555",
"### System Info:\n\n\n* hf\\_name: fra-nor\n* source\\_languages: fra\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'no']\n* src\\_constituents: {'fra'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: nor\n* short\\_pair: fr-no\n* chrF2\\_score: 0.555\n* bleu: 36.1\n* brevity\\_penalty: 0.981\n* ref\\_len: 3089.0\n* src\\_name: French\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: fr\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: fra-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### fra-nor\n\n\n* source group: French\n* target group: Norwegian\n* OPUS readme: fra-nor\n* model: transformer-align\n* source language(s): fra\n* target language(s): nno nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.1, chr-F: 0.555",
"### System Info:\n\n\n* hf\\_name: fra-nor\n* source\\_languages: fra\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'no']\n* src\\_constituents: {'fra'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: nor\n* short\\_pair: fr-no\n* chrF2\\_score: 0.555\n* bleu: 36.1\n* brevity\\_penalty: 0.981\n* ref\\_len: 3089.0\n* src\\_name: French\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: fr\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: fra-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
160,
396
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### fra-nor\n\n\n* source group: French\n* target group: Norwegian\n* OPUS readme: fra-nor\n* model: transformer-align\n* source language(s): fra\n* target language(s): nno nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.1, chr-F: 0.555### System Info:\n\n\n* hf\\_name: fra-nor\n* source\\_languages: fra\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'no']\n* src\\_constituents: {'fra'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: nor\n* short\\_pair: fr-no\n* chrF2\\_score: 0.555\n* bleu: 36.1\n* brevity\\_penalty: 0.981\n* ref\\_len: 3089.0\n* src\\_name: French\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: fr\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: fra-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-fr-nso
* source languages: fr
* target languages: nso
* OPUS readme: [fr-nso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-nso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-nso/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-nso/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-nso/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.nso | 33.3 | 0.527 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-nso | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"nso",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #nso #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-nso
* source languages: fr
* target languages: nso
* OPUS readme: fr-nso
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.3, chr-F: 0.527
| [
"### opus-mt-fr-nso\n\n\n* source languages: fr\n* target languages: nso\n* OPUS readme: fr-nso\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.3, chr-F: 0.527"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #nso #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-nso\n\n\n* source languages: fr\n* target languages: nso\n* OPUS readme: fr-nso\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.3, chr-F: 0.527"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #nso #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-nso\n\n\n* source languages: fr\n* target languages: nso\n* OPUS readme: fr-nso\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.3, chr-F: 0.527"
] |
translation | transformers |
### opus-mt-fr-ny
* source languages: fr
* target languages: ny
* OPUS readme: [fr-ny](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ny/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ny/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ny/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ny/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ny | 23.2 | 0.481 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-ny | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ny",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ny #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-ny
* source languages: fr
* target languages: ny
* OPUS readme: fr-ny
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.2, chr-F: 0.481
| [
"### opus-mt-fr-ny\n\n\n* source languages: fr\n* target languages: ny\n* OPUS readme: fr-ny\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.2, chr-F: 0.481"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ny #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-ny\n\n\n* source languages: fr\n* target languages: ny\n* OPUS readme: fr-ny\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.2, chr-F: 0.481"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ny #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-ny\n\n\n* source languages: fr\n* target languages: ny\n* OPUS readme: fr-ny\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.2, chr-F: 0.481"
] |
translation | transformers |
### opus-mt-fr-pag
* source languages: fr
* target languages: pag
* OPUS readme: [fr-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pag/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pag/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pag/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pag/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.pag | 27.0 | 0.486 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-pag | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"fr",
"pag",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #fr #pag #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-pag
* source languages: fr
* target languages: pag
* OPUS readme: fr-pag
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.0, chr-F: 0.486
| [
"### opus-mt-fr-pag\n\n\n* source languages: fr\n* target languages: pag\n* OPUS readme: fr-pag\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.0, chr-F: 0.486"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #fr #pag #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-pag\n\n\n* source languages: fr\n* target languages: pag\n* OPUS readme: fr-pag\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.0, chr-F: 0.486"
] | [
56,
109
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #fr #pag #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-pag\n\n\n* source languages: fr\n* target languages: pag\n* OPUS readme: fr-pag\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.0, chr-F: 0.486"
] |
translation | transformers |
### opus-mt-fr-pap
* source languages: fr
* target languages: pap
* OPUS readme: [fr-pap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pap/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pap/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pap/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.pap | 27.8 | 0.464 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-pap | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"pap",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #pap #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-pap
* source languages: fr
* target languages: pap
* OPUS readme: fr-pap
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.8, chr-F: 0.464
| [
"### opus-mt-fr-pap\n\n\n* source languages: fr\n* target languages: pap\n* OPUS readme: fr-pap\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.464"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #pap #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-pap\n\n\n* source languages: fr\n* target languages: pap\n* OPUS readme: fr-pap\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.464"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #pap #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-pap\n\n\n* source languages: fr\n* target languages: pap\n* OPUS readme: fr-pap\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.464"
] |
translation | transformers |
### opus-mt-fr-pis
* source languages: fr
* target languages: pis
* OPUS readme: [fr-pis](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pis/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pis/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pis/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pis/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.pis | 29.0 | 0.486 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-pis | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"pis",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #pis #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-pis
* source languages: fr
* target languages: pis
* OPUS readme: fr-pis
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 29.0, chr-F: 0.486
| [
"### opus-mt-fr-pis\n\n\n* source languages: fr\n* target languages: pis\n* OPUS readme: fr-pis\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.0, chr-F: 0.486"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #pis #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-pis\n\n\n* source languages: fr\n* target languages: pis\n* OPUS readme: fr-pis\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.0, chr-F: 0.486"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #pis #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-pis\n\n\n* source languages: fr\n* target languages: pis\n* OPUS readme: fr-pis\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.0, chr-F: 0.486"
] |
translation | transformers |
### opus-mt-fr-pl
* source languages: fr
* target languages: pl
* OPUS readme: [fr-pl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.pl | 40.7 | 0.625 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-pl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"pl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-pl
* source languages: fr
* target languages: pl
* OPUS readme: fr-pl
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 40.7, chr-F: 0.625
| [
"### opus-mt-fr-pl\n\n\n* source languages: fr\n* target languages: pl\n* OPUS readme: fr-pl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.7, chr-F: 0.625"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-pl\n\n\n* source languages: fr\n* target languages: pl\n* OPUS readme: fr-pl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.7, chr-F: 0.625"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-pl\n\n\n* source languages: fr\n* target languages: pl\n* OPUS readme: fr-pl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.7, chr-F: 0.625"
] |
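The `id` column in these rows follows one naming scheme: `Helsinki-NLP/opus-mt-{source}-{target}`. A small sketch of reconstructing a repo id from a language pair — assuming only the pattern visible in the rows above — might be:

```python
def opus_mt_id(src: str, tgt: str) -> str:
    """Build the Hugging Face repo id used by the rows in this dump,
    e.g. opus_mt_id('fr', 'pl') -> 'Helsinki-NLP/opus-mt-fr-pl'."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"


print(opus_mt_id("fr", "pl"))
# → Helsinki-NLP/opus-mt-fr-pl
```

Such an id can then be passed to the usual `from_pretrained` loaders in the `transformers` library, which is the `library_name` recorded for every row here.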
translation | transformers |
### opus-mt-fr-pon
* source languages: fr
* target languages: pon
* OPUS readme: [fr-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pon/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pon/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pon/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pon/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.pon | 23.9 | 0.458 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-pon | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"pon",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #pon #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-pon
* source languages: fr
* target languages: pon
* OPUS readme: fr-pon
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.9, chr-F: 0.458
| [
"### opus-mt-fr-pon\n\n\n* source languages: fr\n* target languages: pon\n* OPUS readme: fr-pon\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.9, chr-F: 0.458"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #pon #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-pon\n\n\n* source languages: fr\n* target languages: pon\n* OPUS readme: fr-pon\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.9, chr-F: 0.458"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #pon #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-pon\n\n\n* source languages: fr\n* target languages: pon\n* OPUS readme: fr-pon\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.9, chr-F: 0.458"
] |
translation | transformers |
### opus-mt-fr-rnd
* source languages: fr
* target languages: rnd
* OPUS readme: [fr-rnd](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-rnd/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-rnd/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-rnd/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-rnd/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.rnd | 21.8 | 0.431 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-rnd | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"rnd",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #rnd #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-rnd
* source languages: fr
* target languages: rnd
* OPUS readme: fr-rnd
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.8, chr-F: 0.431
| [
"### opus-mt-fr-rnd\n\n\n* source languages: fr\n* target languages: rnd\n* OPUS readme: fr-rnd\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.431"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #rnd #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-rnd\n\n\n* source languages: fr\n* target languages: rnd\n* OPUS readme: fr-rnd\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.431"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #rnd #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-rnd\n\n\n* source languages: fr\n* target languages: rnd\n* OPUS readme: fr-rnd\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.431"
] |
translation | transformers |
### opus-mt-fr-ro
* source languages: fr
* target languages: ro
* OPUS readme: [fr-ro](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ro/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ro/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ro/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ro/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.ro | 42.1 | 0.640 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-ro | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ro",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ro #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-ro
* source languages: fr
* target languages: ro
* OPUS readme: fr-ro
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 42.1, chr-F: 0.640
| [
"### opus-mt-fr-ro\n\n\n* source languages: fr\n* target languages: ro\n* OPUS readme: fr-ro\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.1, chr-F: 0.640"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ro #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-ro\n\n\n* source languages: fr\n* target languages: ro\n* OPUS readme: fr-ro\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.1, chr-F: 0.640"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ro #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-ro\n\n\n* source languages: fr\n* target languages: ro\n* OPUS readme: fr-ro\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.1, chr-F: 0.640"
] |
translation | transformers |
### opus-mt-fr-ru
* source languages: fr
* target languages: ru
* OPUS readme: [fr-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ru/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ru/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.ru | 37.9 | 0.585 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-ru | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-ru
* source languages: fr
* target languages: ru
* OPUS readme: fr-ru
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.9, chr-F: 0.585
| [
"### opus-mt-fr-ru\n\n\n* source languages: fr\n* target languages: ru\n* OPUS readme: fr-ru\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.9, chr-F: 0.585"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-ru\n\n\n* source languages: fr\n* target languages: ru\n* OPUS readme: fr-ru\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.9, chr-F: 0.585"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-ru\n\n\n* source languages: fr\n* target languages: ru\n* OPUS readme: fr-ru\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.9, chr-F: 0.585"
] |
translation | transformers |
### opus-mt-fr-run
* source languages: fr
* target languages: run
* OPUS readme: [fr-run](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-run/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-run/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-run/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-run/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.run | 23.8 | 0.482 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-run | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"run",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #run #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-run
* source languages: fr
* target languages: run
* OPUS readme: fr-run
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.8, chr-F: 0.482
| [
"### opus-mt-fr-run\n\n\n* source languages: fr\n* target languages: run\n* OPUS readme: fr-run\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.8, chr-F: 0.482"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #run #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-run\n\n\n* source languages: fr\n* target languages: run\n* OPUS readme: fr-run\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.8, chr-F: 0.482"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #run #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-run\n\n\n* source languages: fr\n* target languages: run\n* OPUS readme: fr-run\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.8, chr-F: 0.482"
] |
translation | transformers |
### opus-mt-fr-rw
* source languages: fr
* target languages: rw
* OPUS readme: [fr-rw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-rw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-rw/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-rw/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-rw/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.rw | 25.5 | 0.483 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-rw | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"rw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #rw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-rw
* source languages: fr
* target languages: rw
* OPUS readme: fr-rw
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.5, chr-F: 0.483
| [
"### opus-mt-fr-rw\n\n\n* source languages: fr\n* target languages: rw\n* OPUS readme: fr-rw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.5, chr-F: 0.483"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #rw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-rw\n\n\n* source languages: fr\n* target languages: rw\n* OPUS readme: fr-rw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.5, chr-F: 0.483"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #rw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-rw\n\n\n* source languages: fr\n* target languages: rw\n* OPUS readme: fr-rw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.5, chr-F: 0.483"
] |
translation | transformers |
### opus-mt-fr-sg
* source languages: fr
* target languages: sg
* OPUS readme: [fr-sg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sg/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sg/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sg/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sg | 29.7 | 0.473 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-sg | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"sg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-sg
* source languages: fr
* target languages: sg
* OPUS readme: fr-sg
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 29.7, chr-F: 0.473
| [
"### opus-mt-fr-sg\n\n\n* source languages: fr\n* target languages: sg\n* OPUS readme: fr-sg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.7, chr-F: 0.473"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-sg\n\n\n* source languages: fr\n* target languages: sg\n* OPUS readme: fr-sg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.7, chr-F: 0.473"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-sg\n\n\n* source languages: fr\n* target languages: sg\n* OPUS readme: fr-sg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.7, chr-F: 0.473"
] |
translation | transformers |
### opus-mt-fr-sk
* source languages: fr
* target languages: sk
* OPUS readme: [fr-sk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sk/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sk/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sk/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sk | 24.9 | 0.456 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-sk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"sk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-sk
* source languages: fr
* target languages: sk
* OPUS readme: fr-sk
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.9, chr-F: 0.456
| [
"### opus-mt-fr-sk\n\n\n* source languages: fr\n* target languages: sk\n* OPUS readme: fr-sk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.9, chr-F: 0.456"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-sk\n\n\n* source languages: fr\n* target languages: sk\n* OPUS readme: fr-sk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.9, chr-F: 0.456"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-sk\n\n\n* source languages: fr\n* target languages: sk\n* OPUS readme: fr-sk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.9, chr-F: 0.456"
] |
translation | transformers |
### opus-mt-fr-sl
* source languages: fr
* target languages: sl
* OPUS readme: [fr-sl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sl | 20.1 | 0.433 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-sl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"sl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-sl
* source languages: fr
* target languages: sl
* OPUS readme: fr-sl
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 20.1, chr-F: 0.433
| [
"### opus-mt-fr-sl\n\n\n* source languages: fr\n* target languages: sl\n* OPUS readme: fr-sl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.1, chr-F: 0.433"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-sl\n\n\n* source languages: fr\n* target languages: sl\n* OPUS readme: fr-sl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.1, chr-F: 0.433"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-sl\n\n\n* source languages: fr\n* target languages: sl\n* OPUS readme: fr-sl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.1, chr-F: 0.433"
] |
translation | transformers |
### opus-mt-fr-sm
* source languages: fr
* target languages: sm
* OPUS readme: [fr-sm](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sm/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sm/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sm/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sm/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sm | 28.8 | 0.474 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-sm | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"sm",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sm #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-sm
* source languages: fr
* target languages: sm
* OPUS readme: fr-sm
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 28.8, chr-F: 0.474
| [
"### opus-mt-fr-sm\n\n\n* source languages: fr\n* target languages: sm\n* OPUS readme: fr-sm\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.8, chr-F: 0.474"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sm #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-sm\n\n\n* source languages: fr\n* target languages: sm\n* OPUS readme: fr-sm\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.8, chr-F: 0.474"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sm #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-sm\n\n\n* source languages: fr\n* target languages: sm\n* OPUS readme: fr-sm\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.8, chr-F: 0.474"
] |
translation | transformers |
### opus-mt-fr-sn
* source languages: fr
* target languages: sn
* OPUS readme: [fr-sn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.sn | 23.4 | 0.507 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-sn | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"sn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-sn
* source languages: fr
* target languages: sn
* OPUS readme: fr-sn
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.4, chr-F: 0.507
| [
"### opus-mt-fr-sn\n\n\n* source languages: fr\n* target languages: sn\n* OPUS readme: fr-sn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.4, chr-F: 0.507"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-sn\n\n\n* source languages: fr\n* target languages: sn\n* OPUS readme: fr-sn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.4, chr-F: 0.507"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-sn\n\n\n* source languages: fr\n* target languages: sn\n* OPUS readme: fr-sn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.4, chr-F: 0.507"
] |
translation | transformers |
### opus-mt-fr-srn
* source languages: fr
* target languages: srn
* OPUS readme: [fr-srn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-srn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-srn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-srn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-srn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.srn | 27.4 | 0.459 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-srn | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"srn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #srn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-srn
* source languages: fr
* target languages: srn
* OPUS readme: fr-srn
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.4, chr-F: 0.459
| [
"### opus-mt-fr-srn\n\n\n* source languages: fr\n* target languages: srn\n* OPUS readme: fr-srn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.4, chr-F: 0.459"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #srn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-srn\n\n\n* source languages: fr\n* target languages: srn\n* OPUS readme: fr-srn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.4, chr-F: 0.459"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #srn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-srn\n\n\n* source languages: fr\n* target languages: srn\n* OPUS readme: fr-srn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.4, chr-F: 0.459"
] |
translation | transformers |
### opus-mt-fr-st
* source languages: fr
* target languages: st
* OPUS readme: [fr-st](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-st/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-st/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-st/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-st/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.st | 34.6 | 0.540 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-st | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"st",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #st #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-st
* source languages: fr
* target languages: st
* OPUS readme: fr-st
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 34.6, chr-F: 0.540
| [
"### opus-mt-fr-st\n\n\n* source languages: fr\n* target languages: st\n* OPUS readme: fr-st\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.6, chr-F: 0.540"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #st #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-st\n\n\n* source languages: fr\n* target languages: st\n* OPUS readme: fr-st\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.6, chr-F: 0.540"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #st #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-st\n\n\n* source languages: fr\n* target languages: st\n* OPUS readme: fr-st\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.6, chr-F: 0.540"
] |
translation | transformers |
### opus-mt-fr-sv
* source languages: fr
* target languages: sv
* OPUS readme: [fr-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sv/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sv/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sv/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.sv | 60.1 | 0.744 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-sv
* source languages: fr
* target languages: sv
* OPUS readme: fr-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 60.1, chr-F: 0.744
| [
"### opus-mt-fr-sv\n\n\n* source languages: fr\n* target languages: sv\n* OPUS readme: fr-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 60.1, chr-F: 0.744"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-sv\n\n\n* source languages: fr\n* target languages: sv\n* OPUS readme: fr-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 60.1, chr-F: 0.744"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-sv\n\n\n* source languages: fr\n* target languages: sv\n* OPUS readme: fr-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 60.1, chr-F: 0.744"
] |
translation | transformers |
### opus-mt-fr-swc
* source languages: fr
* target languages: swc
* OPUS readme: [fr-swc](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-swc/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-swc/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-swc/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-swc/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.swc | 28.2 | 0.499 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-swc | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"swc",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #swc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-swc
* source languages: fr
* target languages: swc
* OPUS readme: fr-swc
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 28.2, chr-F: 0.499
| [
"### opus-mt-fr-swc\n\n\n* source languages: fr\n* target languages: swc\n* OPUS readme: fr-swc\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.2, chr-F: 0.499"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #swc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-swc\n\n\n* source languages: fr\n* target languages: swc\n* OPUS readme: fr-swc\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.2, chr-F: 0.499"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #swc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-swc\n\n\n* source languages: fr\n* target languages: swc\n* OPUS readme: fr-swc\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.2, chr-F: 0.499"
] |
translation | transformers |
### opus-mt-fr-tiv
* source languages: fr
* target languages: tiv
* OPUS readme: [fr-tiv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tiv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tiv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tiv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tiv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tiv | 23.5 | 0.406 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-tiv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"tiv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tiv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-tiv
* source languages: fr
* target languages: tiv
* OPUS readme: fr-tiv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.5, chr-F: 0.406
| [
"### opus-mt-fr-tiv\n\n\n* source languages: fr\n* target languages: tiv\n* OPUS readme: fr-tiv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.5, chr-F: 0.406"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tiv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-tiv\n\n\n* source languages: fr\n* target languages: tiv\n* OPUS readme: fr-tiv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.5, chr-F: 0.406"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tiv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-tiv\n\n\n* source languages: fr\n* target languages: tiv\n* OPUS readme: fr-tiv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.5, chr-F: 0.406"
] |
translation | transformers |
### fra-tgl
* source group: French
* target group: Tagalog
* OPUS readme: [fra-tgl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-tgl/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): tgl_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.tgl | 24.1 | 0.536 |
### System Info:
- hf_name: fra-tgl
- source_languages: fra
- target_languages: tgl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-tgl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'tl']
- src_constituents: {'fra'}
- tgt_constituents: {'tgl_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-tgl/opus-2020-06-17.test.txt
- src_alpha3: fra
- tgt_alpha3: tgl
- short_pair: fr-tl
- chrF2_score: 0.536
- bleu: 24.1
- brevity_penalty: 1.0
- ref_len: 5778.0
- src_name: French
- tgt_name: Tagalog
- train_date: 2020-06-17
- src_alpha2: fr
- tgt_alpha2: tl
- prefer_old: False
- long_pair: fra-tgl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["fr", "tl"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-tl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"tl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fr",
"tl"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### fra-tgl
* source group: French
* target group: Tagalog
* OPUS readme: fra-tgl
* model: transformer-align
* source language(s): fra
* target language(s): tgl\_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.1, chr-F: 0.536
### System Info:
* hf\_name: fra-tgl
* source\_languages: fra
* target\_languages: tgl
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['fr', 'tl']
* src\_constituents: {'fra'}
* tgt\_constituents: {'tgl\_Latn'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: fra
* tgt\_alpha3: tgl
* short\_pair: fr-tl
* chrF2\_score: 0.536
* bleu: 24.1
* brevity\_penalty: 1.0
* ref\_len: 5778.0
* src\_name: French
* tgt\_name: Tagalog
* train\_date: 2020-06-17
* src\_alpha2: fr
* tgt\_alpha2: tl
* prefer\_old: False
* long\_pair: fra-tgl
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### fra-tgl\n\n\n* source group: French\n* target group: Tagalog\n* OPUS readme: fra-tgl\n* model: transformer-align\n* source language(s): fra\n* target language(s): tgl\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.1, chr-F: 0.536",
"### System Info:\n\n\n* hf\\_name: fra-tgl\n* source\\_languages: fra\n* target\\_languages: tgl\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'tl']\n* src\\_constituents: {'fra'}\n* tgt\\_constituents: {'tgl\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: tgl\n* short\\_pair: fr-tl\n* chrF2\\_score: 0.536\n* bleu: 24.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 5778.0\n* src\\_name: French\n* tgt\\_name: Tagalog\n* train\\_date: 2020-06-17\n* src\\_alpha2: fr\n* tgt\\_alpha2: tl\n* prefer\\_old: False\n* long\\_pair: fra-tgl\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### fra-tgl\n\n\n* source group: French\n* target group: Tagalog\n* OPUS readme: fra-tgl\n* model: transformer-align\n* source language(s): fra\n* target language(s): tgl\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.1, chr-F: 0.536",
"### System Info:\n\n\n* hf\\_name: fra-tgl\n* source\\_languages: fra\n* target\\_languages: tgl\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'tl']\n* src\\_constituents: {'fra'}\n* tgt\\_constituents: {'tgl\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: tgl\n* short\\_pair: fr-tl\n* chrF2\\_score: 0.536\n* bleu: 24.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 5778.0\n* src\\_name: French\n* tgt\\_name: Tagalog\n* train\\_date: 2020-06-17\n* src\\_alpha2: fr\n* tgt\\_alpha2: tl\n* prefer\\_old: False\n* long\\_pair: fra-tgl\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
141,
406
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### fra-tgl\n\n\n* source group: French\n* target group: Tagalog\n* OPUS readme: fra-tgl\n* model: transformer-align\n* source language(s): fra\n* target language(s): tgl\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.1, chr-F: 0.536### System Info:\n\n\n* hf\\_name: fra-tgl\n* source\\_languages: fra\n* target\\_languages: tgl\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'tl']\n* src\\_constituents: {'fra'}\n* tgt\\_constituents: {'tgl\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: tgl\n* short\\_pair: fr-tl\n* chrF2\\_score: 0.536\n* bleu: 24.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 5778.0\n* src\\_name: French\n* tgt\\_name: Tagalog\n* train\\_date: 2020-06-17\n* src\\_alpha2: fr\n* tgt\\_alpha2: tl\n* prefer\\_old: False\n* long\\_pair: fra-tgl\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-fr-tll
* source languages: fr
* target languages: tll
* OPUS readme: [fr-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tll/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tll/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tll | 24.6 | 0.467 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-tll | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"tll",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tll #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-tll
* source languages: fr
* target languages: tll
* OPUS readme: fr-tll
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.6, chr-F: 0.467
| [
"### opus-mt-fr-tll\n\n\n* source languages: fr\n* target languages: tll\n* OPUS readme: fr-tll\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.6, chr-F: 0.467"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tll #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-tll\n\n\n* source languages: fr\n* target languages: tll\n* OPUS readme: fr-tll\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.6, chr-F: 0.467"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tll #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-tll\n\n\n* source languages: fr\n* target languages: tll\n* OPUS readme: fr-tll\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.6, chr-F: 0.467"
] |
translation | transformers |
### opus-mt-fr-tn
* source languages: fr
* target languages: tn
* OPUS readme: [fr-tn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tn/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tn/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tn/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tn | 33.1 | 0.525 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-tn | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"tn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-tn
* source languages: fr
* target languages: tn
* OPUS readme: fr-tn
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.1, chr-F: 0.525
| [
"### opus-mt-fr-tn\n\n\n* source languages: fr\n* target languages: tn\n* OPUS readme: fr-tn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.1, chr-F: 0.525"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-tn\n\n\n* source languages: fr\n* target languages: tn\n* OPUS readme: fr-tn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.1, chr-F: 0.525"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-tn\n\n\n* source languages: fr\n* target languages: tn\n* OPUS readme: fr-tn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.1, chr-F: 0.525"
] |
translation | transformers |
### opus-mt-fr-to
* source languages: fr
* target languages: to
* OPUS readme: [fr-to](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-to/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-to/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-to/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-to/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.to | 37.0 | 0.518 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-to | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"to",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #to #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-to
* source languages: fr
* target languages: to
* OPUS readme: fr-to
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.0, chr-F: 0.518
| [
"### opus-mt-fr-to\n\n\n* source languages: fr\n* target languages: to\n* OPUS readme: fr-to\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.0, chr-F: 0.518"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #to #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-to\n\n\n* source languages: fr\n* target languages: to\n* OPUS readme: fr-to\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.0, chr-F: 0.518"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #to #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-to\n\n\n* source languages: fr\n* target languages: to\n* OPUS readme: fr-to\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.0, chr-F: 0.518"
] |
translation | transformers |
### opus-mt-fr-tpi
* source languages: fr
* target languages: tpi
* OPUS readme: [fr-tpi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tpi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tpi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tpi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tpi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tpi | 30.0 | 0.487 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-tpi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"tpi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tpi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-tpi
* source languages: fr
* target languages: tpi
* OPUS readme: fr-tpi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.0, chr-F: 0.487
| [
"### opus-mt-fr-tpi\n\n\n* source languages: fr\n* target languages: tpi\n* OPUS readme: fr-tpi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.0, chr-F: 0.487"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tpi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-tpi\n\n\n* source languages: fr\n* target languages: tpi\n* OPUS readme: fr-tpi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.0, chr-F: 0.487"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tpi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-tpi\n\n\n* source languages: fr\n* target languages: tpi\n* OPUS readme: fr-tpi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.0, chr-F: 0.487"
] |
translation | transformers |
### opus-mt-fr-ts
* source languages: fr
* target languages: ts
* OPUS readme: [fr-ts](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ts/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ts/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ts/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ts/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ts | 31.4 | 0.525 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-ts | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-ts
* source languages: fr
* target languages: ts
* OPUS readme: fr-ts
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.4, chr-F: 0.525
| [
"### opus-mt-fr-ts\n\n\n* source languages: fr\n* target languages: ts\n* OPUS readme: fr-ts\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.4, chr-F: 0.525"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-ts\n\n\n* source languages: fr\n* target languages: ts\n* OPUS readme: fr-ts\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.4, chr-F: 0.525"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-ts\n\n\n* source languages: fr\n* target languages: ts\n* OPUS readme: fr-ts\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.4, chr-F: 0.525"
] |
translation | transformers |
### opus-mt-fr-tum
* source languages: fr
* target languages: tum
* OPUS readme: [fr-tum](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tum/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tum/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tum/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tum/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tum | 23.0 | 0.458 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-tum | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"tum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tum #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-tum
* source languages: fr
* target languages: tum
* OPUS readme: fr-tum
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.0, chr-F: 0.458
| [
"### opus-mt-fr-tum\n\n\n* source languages: fr\n* target languages: tum\n* OPUS readme: fr-tum\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.0, chr-F: 0.458"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tum #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-tum\n\n\n* source languages: fr\n* target languages: tum\n* OPUS readme: fr-tum\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.0, chr-F: 0.458"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tum #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-tum\n\n\n* source languages: fr\n* target languages: tum\n* OPUS readme: fr-tum\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.0, chr-F: 0.458"
] |
translation | transformers |
### opus-mt-fr-tvl
* source languages: fr
* target languages: tvl
* OPUS readme: [fr-tvl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tvl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tvl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tvl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tvl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tvl | 32.6 | 0.497 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-tvl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"tvl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tvl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-tvl
* source languages: fr
* target languages: tvl
* OPUS readme: fr-tvl
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 32.6, chr-F: 0.497
| [
"### opus-mt-fr-tvl\n\n\n* source languages: fr\n* target languages: tvl\n* OPUS readme: fr-tvl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.497"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tvl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-tvl\n\n\n* source languages: fr\n* target languages: tvl\n* OPUS readme: fr-tvl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.497"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tvl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-tvl\n\n\n* source languages: fr\n* target languages: tvl\n* OPUS readme: fr-tvl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.497"
] |
translation | transformers |
### opus-mt-fr-tw
* source languages: fr
* target languages: tw
* OPUS readme: [fr-tw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-tw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-tw/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tw/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-tw/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.tw | 27.9 | 0.469 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-tw | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"tw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-tw
* source languages: fr
* target languages: tw
* OPUS readme: fr-tw
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.9, chr-F: 0.469
| [
"### opus-mt-fr-tw\n\n\n* source languages: fr\n* target languages: tw\n* OPUS readme: fr-tw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.469"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-tw\n\n\n* source languages: fr\n* target languages: tw\n* OPUS readme: fr-tw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.469"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #tw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-tw\n\n\n* source languages: fr\n* target languages: tw\n* OPUS readme: fr-tw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.469"
] |
translation | transformers |
### opus-mt-fr-ty
* source languages: fr
* target languages: ty
* OPUS readme: [fr-ty](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ty/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ty/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ty/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ty/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ty | 39.6 | 0.561 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-ty | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ty",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ty #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-ty
* source languages: fr
* target languages: ty
* OPUS readme: fr-ty
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 39.6, chr-F: 0.561
| [
"### opus-mt-fr-ty\n\n\n* source languages: fr\n* target languages: ty\n* OPUS readme: fr-ty\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.6, chr-F: 0.561"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ty #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-ty\n\n\n* source languages: fr\n* target languages: ty\n* OPUS readme: fr-ty\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.6, chr-F: 0.561"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ty #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-ty\n\n\n* source languages: fr\n* target languages: ty\n* OPUS readme: fr-ty\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.6, chr-F: 0.561"
] |
translation | transformers |
### opus-mt-fr-uk
* source languages: fr
* target languages: uk
* OPUS readme: [fr-uk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-uk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-uk/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-uk/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-uk/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fr.uk | 39.4 | 0.581 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-uk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-uk
* source languages: fr
* target languages: uk
* OPUS readme: fr-uk
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 39.4, chr-F: 0.581
| [
"### opus-mt-fr-uk\n\n\n* source languages: fr\n* target languages: uk\n* OPUS readme: fr-uk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.4, chr-F: 0.581"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-uk\n\n\n* source languages: fr\n* target languages: uk\n* OPUS readme: fr-uk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.4, chr-F: 0.581"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-uk\n\n\n* source languages: fr\n* target languages: uk\n* OPUS readme: fr-uk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.4, chr-F: 0.581"
] |
translation | transformers |
### opus-mt-fr-ve
* source languages: fr
* target languages: ve
* OPUS readme: [fr-ve](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ve/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ve/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ve/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ve/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.ve | 26.3 | 0.481 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-ve | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"ve",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ve #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-ve
* source languages: fr
* target languages: ve
* OPUS readme: fr-ve
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.3, chr-F: 0.481
| [
"### opus-mt-fr-ve\n\n\n* source languages: fr\n* target languages: ve\n* OPUS readme: fr-ve\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.3, chr-F: 0.481"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ve #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-ve\n\n\n* source languages: fr\n* target languages: ve\n* OPUS readme: fr-ve\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.3, chr-F: 0.481"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #ve #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-ve\n\n\n* source languages: fr\n* target languages: ve\n* OPUS readme: fr-ve\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.3, chr-F: 0.481"
] |
translation | transformers |
### fra-vie
* source group: French
* target group: Vietnamese
* OPUS readme: [fra-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-vie/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.vie | 31.1 | 0.486 |
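The System Info block below reports a `brevity_penalty` of 0.985 against `ref_len` 13219 — the multiplicative factor BLEU applies when system output is shorter than the references. A minimal sketch of the standard formula; the example lengths are illustrative, not taken from this evaluation:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """Corpus-level BLEU brevity penalty: 1 when output is at least as long
    as the reference, exp(1 - r/c) when it is shorter."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

print(brevity_penalty(120, 100))           # 1.0  (no penalty for longer output)
print(round(brevity_penalty(90, 100), 3))  # 0.895
```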
### System Info:
- hf_name: fra-vie
- source_languages: fra
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'vi']
- src_constituents: {'fra'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-vie/opus-2020-06-17.test.txt
- src_alpha3: fra
- tgt_alpha3: vie
- short_pair: fr-vi
- chrF2_score: 0.486
- bleu: 31.1
- brevity_penalty: 0.985
- ref_len: 13219.0
- src_name: French
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: fr
- tgt_alpha2: vi
- prefer_old: False
- long_pair: fra-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["fr", "vi"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-vi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"vi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fr",
"vi"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### fra-vie
* source group: French
* target group: Vietnamese
* OPUS readme: fra-vie
* model: transformer-align
* source language(s): fra
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.1, chr-F: 0.486
### System Info:
* hf\_name: fra-vie
* source\_languages: fra
* target\_languages: vie
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['fr', 'vi']
* src\_constituents: {'fra'}
* tgt\_constituents: {'vie', 'vie\_Hani'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: fra
* tgt\_alpha3: vie
* short\_pair: fr-vi
* chrF2\_score: 0.486
* bleu: 31.1
* brevity\_penalty: 0.985
* ref\_len: 13219.0
* src\_name: French
* tgt\_name: Vietnamese
* train\_date: 2020-06-17
* src\_alpha2: fr
* tgt\_alpha2: vi
* prefer\_old: False
* long\_pair: fra-vie
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### fra-vie\n\n\n* source group: French\n* target group: Vietnamese\n* OPUS readme: fra-vie\n* model: transformer-align\n* source language(s): fra\n* target language(s): vie\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.1, chr-F: 0.486",
"### System Info:\n\n\n* hf\\_name: fra-vie\n* source\\_languages: fra\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'vi']\n* src\\_constituents: {'fra'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: vie\n* short\\_pair: fr-vi\n* chrF2\\_score: 0.486\n* bleu: 31.1\n* brevity\\_penalty: 0.985\n* ref\\_len: 13219.0\n* src\\_name: French\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: fr\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: fra-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### fra-vie\n\n\n* source group: French\n* target group: Vietnamese\n* OPUS readme: fra-vie\n* model: transformer-align\n* source language(s): fra\n* target language(s): vie\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.1, chr-F: 0.486",
"### System Info:\n\n\n* hf\\_name: fra-vie\n* source\\_languages: fra\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'vi']\n* src\\_constituents: {'fra'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: vie\n* short\\_pair: fr-vi\n* chrF2\\_score: 0.486\n* bleu: 31.1\n* brevity\\_penalty: 0.985\n* ref\\_len: 13219.0\n* src\\_name: French\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: fr\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: fra-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
131,
399
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### fra-vie\n\n\n* source group: French\n* target group: Vietnamese\n* OPUS readme: fra-vie\n* model: transformer-align\n* source language(s): fra\n* target language(s): vie\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.1, chr-F: 0.486### System Info:\n\n\n* hf\\_name: fra-vie\n* source\\_languages: fra\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'vi']\n* src\\_constituents: {'fra'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: vie\n* short\\_pair: fr-vi\n* chrF2\\_score: 0.486\n* bleu: 31.1\n* brevity\\_penalty: 0.985\n* ref\\_len: 13219.0\n* src\\_name: French\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: fr\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: fra-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-fr-war
* source languages: fr
* target languages: war
* OPUS readme: [fr-war](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-war/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-war/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-war/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-war/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.war | 33.7 | 0.538 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-war | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"war",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #war #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-war
* source languages: fr
* target languages: war
* OPUS readme: fr-war
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.7, chr-F: 0.538
| [
"### opus-mt-fr-war\n\n\n* source languages: fr\n* target languages: war\n* OPUS readme: fr-war\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.7, chr-F: 0.538"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #war #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-war\n\n\n* source languages: fr\n* target languages: war\n* OPUS readme: fr-war\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.7, chr-F: 0.538"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #war #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-war\n\n\n* source languages: fr\n* target languages: war\n* OPUS readme: fr-war\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.7, chr-F: 0.538"
] |
translation | transformers |
### opus-mt-fr-wls
* source languages: fr
* target languages: wls
* OPUS readme: [fr-wls](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-wls/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-wls/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-wls/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-wls/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.wls | 27.5 | 0.478 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-wls | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"wls",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #wls #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-wls
* source languages: fr
* target languages: wls
* OPUS readme: fr-wls
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.5, chr-F: 0.478
| [
"### opus-mt-fr-wls\n\n\n* source languages: fr\n* target languages: wls\n* OPUS readme: fr-wls\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.5, chr-F: 0.478"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #wls #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-wls\n\n\n* source languages: fr\n* target languages: wls\n* OPUS readme: fr-wls\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.5, chr-F: 0.478"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #wls #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-wls\n\n\n* source languages: fr\n* target languages: wls\n* OPUS readme: fr-wls\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.5, chr-F: 0.478"
] |
translation | transformers |
### opus-mt-fr-xh
* source languages: fr
* target languages: xh
* OPUS readme: [fr-xh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-xh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-xh/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-xh/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-xh/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.xh | 25.1 | 0.523 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-xh | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"xh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #xh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-xh
* source languages: fr
* target languages: xh
* OPUS readme: fr-xh
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.1, chr-F: 0.523
| [
"### opus-mt-fr-xh\n\n\n* source languages: fr\n* target languages: xh\n* OPUS readme: fr-xh\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.1, chr-F: 0.523"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #xh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-xh\n\n\n* source languages: fr\n* target languages: xh\n* OPUS readme: fr-xh\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.1, chr-F: 0.523"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #xh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-xh\n\n\n* source languages: fr\n* target languages: xh\n* OPUS readme: fr-xh\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.1, chr-F: 0.523"
] |
translation | transformers |
### opus-mt-fr-yap
* source languages: fr
* target languages: yap
* OPUS readme: [fr-yap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-yap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-yap/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-yap/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-yap/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.yap | 25.8 | 0.434 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-yap | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"yap",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #yap #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-yap
* source languages: fr
* target languages: yap
* OPUS readme: fr-yap
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.8, chr-F: 0.434
| [
"### opus-mt-fr-yap\n\n\n* source languages: fr\n* target languages: yap\n* OPUS readme: fr-yap\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.8, chr-F: 0.434"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #yap #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-yap\n\n\n* source languages: fr\n* target languages: yap\n* OPUS readme: fr-yap\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.8, chr-F: 0.434"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #yap #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-yap\n\n\n* source languages: fr\n* target languages: yap\n* OPUS readme: fr-yap\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.8, chr-F: 0.434"
] |
translation | transformers |
### opus-mt-fr-yo
* source languages: fr
* target languages: yo
* OPUS readme: [fr-yo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-yo/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-yo/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-yo/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-yo/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.yo | 25.9 | 0.415 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-yo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"yo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #yo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-yo
* source languages: fr
* target languages: yo
* OPUS readme: fr-yo
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.9, chr-F: 0.415
| [
"### opus-mt-fr-yo\n\n\n* source languages: fr\n* target languages: yo\n* OPUS readme: fr-yo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.9, chr-F: 0.415"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #yo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-yo\n\n\n* source languages: fr\n* target languages: yo\n* OPUS readme: fr-yo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.9, chr-F: 0.415"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #yo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-yo\n\n\n* source languages: fr\n* target languages: yo\n* OPUS readme: fr-yo\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.9, chr-F: 0.415"
] |
translation | transformers |
### opus-mt-fr-zne
* source languages: fr
* target languages: zne
* OPUS readme: [fr-zne](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-zne/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-zne/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-zne/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-zne/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.zne | 24.1 | 0.460 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fr-zne | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"zne",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #zne #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fr-zne
* source languages: fr
* target languages: zne
* OPUS readme: fr-zne
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.1, chr-F: 0.460
| [
"### opus-mt-fr-zne\n\n\n* source languages: fr\n* target languages: zne\n* OPUS readme: fr-zne\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.1, chr-F: 0.460"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #zne #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fr-zne\n\n\n* source languages: fr\n* target languages: zne\n* OPUS readme: fr-zne\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.1, chr-F: 0.460"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #zne #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fr-zne\n\n\n* source languages: fr\n* target languages: zne\n* OPUS readme: fr-zne\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.1, chr-F: 0.460"
] |
translation | transformers |
### opus-mt-fse-fi
* source languages: fse
* target languages: fi
* OPUS readme: [fse-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fse-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fse-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fse-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fse-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fse.fi | 90.2 | 0.943 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-fse-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fse",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fse #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-fse-fi
* source languages: fse
* target languages: fi
* OPUS readme: fse-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 90.2, chr-F: 0.943
| [
"### opus-mt-fse-fi\n\n\n* source languages: fse\n* target languages: fi\n* OPUS readme: fse-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 90.2, chr-F: 0.943"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fse #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-fse-fi\n\n\n* source languages: fse\n* target languages: fi\n* OPUS readme: fse-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 90.2, chr-F: 0.943"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fse #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-fse-fi\n\n\n* source languages: fse\n* target languages: fi\n* OPUS readme: fse-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 90.2, chr-F: 0.943"
] |
translation | transformers |
### gle-eng
* source group: Irish
* target group: English
* OPUS readme: [gle-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gle-eng/README.md)
* model: transformer-align
* source language(s): gle
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.gle.eng | 51.6 | 0.672 |
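The chr-F column above is a character n-gram F-score (reported with β = 2 as `chrF2_score` in the system info below). A simplified sentence-level sketch of the idea — not the official implementation, which differs in whitespace handling and corpus-level aggregation:

```python
from collections import Counter

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified sentence-level chrF: average character n-gram precision and
    recall over orders 1..max_n, combined as an F-score weighted toward recall."""
    def ngrams(text: str, n: int) -> Counter:
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = ngrams(hypothesis, n), ngrams(reference, n)
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        if hyp:
            precisions.append(overlap / sum(hyp.values()))
        if ref:
            recalls.append(overlap / sum(ref.values()))
    p = sum(precisions) / len(precisions) if precisions else 0.0
    r = sum(recalls) / len(recalls) if recalls else 0.0
    if p + r == 0.0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

print(chrf("translation", "translation"))  # 1.0 for identical strings
```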
### System Info:
- hf_name: gle-eng
- source_languages: gle
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gle-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ga', 'en']
- src_constituents: {'gle'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gle-eng/opus-2020-06-17.test.txt
- src_alpha3: gle
- tgt_alpha3: eng
- short_pair: ga-en
- chrF2_score: 0.672
- bleu: 51.6
- brevity_penalty: 1.0
- ref_len: 11247.0
- src_name: Irish
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: ga
- tgt_alpha2: en
- prefer_old: False
- long_pair: gle-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ga", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ga-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ga",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ga",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ga #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### gle-eng
* source group: Irish
* target group: English
* OPUS readme: gle-eng
* model: transformer-align
* source language(s): gle
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 51.6, chr-F: 0.672
### System Info:
* hf\_name: gle-eng
* source\_languages: gle
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ga', 'en']
* src\_constituents: {'gle'}
* tgt\_constituents: {'eng'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: gle
* tgt\_alpha3: eng
* short\_pair: ga-en
* chrF2\_score: 0.672
* bleu: 51.6
* brevity\_penalty: 1.0
* ref\_len: 11247.0
* src\_name: Irish
* tgt\_name: English
* train\_date: 2020-06-17
* src\_alpha2: ga
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: gle-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### gle-eng\n\n\n* source group: Irish\n* target group: English\n* OPUS readme: gle-eng\n* model: transformer-align\n* source language(s): gle\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.6, chr-F: 0.672",
"### System Info:\n\n\n* hf\\_name: gle-eng\n* source\\_languages: gle\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ga', 'en']\n* src\\_constituents: {'gle'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: gle\n* tgt\\_alpha3: eng\n* short\\_pair: ga-en\n* chrF2\\_score: 0.672\n* bleu: 51.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 11247.0\n* src\\_name: Irish\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: ga\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: gle-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ga #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### gle-eng\n\n\n* source group: Irish\n* target group: English\n* OPUS readme: gle-eng\n* model: transformer-align\n* source language(s): gle\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.6, chr-F: 0.672",
"### System Info:\n\n\n* hf\\_name: gle-eng\n* source\\_languages: gle\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ga', 'en']\n* src\\_constituents: {'gle'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: gle\n* tgt\\_alpha3: eng\n* short\\_pair: ga-en\n* chrF2\\_score: 0.672\n* bleu: 51.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 11247.0\n* src\\_name: Irish\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: ga\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: gle-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
134,
395
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ga #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### gle-eng\n\n\n* source group: Irish\n* target group: English\n* OPUS readme: gle-eng\n* model: transformer-align\n* source language(s): gle\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.6, chr-F: 0.672### System Info:\n\n\n* hf\\_name: gle-eng\n* source\\_languages: gle\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ga', 'en']\n* src\\_constituents: {'gle'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: gle\n* tgt\\_alpha3: eng\n* short\\_pair: ga-en\n* chrF2\\_score: 0.672\n* bleu: 51.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 11247.0\n* src\\_name: Irish\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: ga\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: gle-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-gaa-de
* source languages: gaa
* target languages: de
* OPUS readme: [gaa-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.de | 23.3 | 0.438 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-gaa-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"gaa",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-gaa-de
* source languages: gaa
* target languages: de
* OPUS readme: gaa-de
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.3, chr-F: 0.438
| [
"### opus-mt-gaa-de\n\n\n* source languages: gaa\n* target languages: de\n* OPUS readme: gaa-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.438"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-gaa-de\n\n\n* source languages: gaa\n* target languages: de\n* OPUS readme: gaa-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.438"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-gaa-de\n\n\n* source languages: gaa\n* target languages: de\n* OPUS readme: gaa-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.438"
] |
translation | transformers |
### opus-mt-gaa-en
* source languages: gaa
* target languages: en
* OPUS readme: [gaa-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.en | 41.0 | 0.567 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-gaa-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"gaa",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-gaa-en
* source languages: gaa
* target languages: en
* OPUS readme: gaa-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 41.0, chr-F: 0.567
| [
"### opus-mt-gaa-en\n\n\n* source languages: gaa\n* target languages: en\n* OPUS readme: gaa-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.0, chr-F: 0.567"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-gaa-en\n\n\n* source languages: gaa\n* target languages: en\n* OPUS readme: gaa-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.0, chr-F: 0.567"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-gaa-en\n\n\n* source languages: gaa\n* target languages: en\n* OPUS readme: gaa-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.0, chr-F: 0.567"
] |
translation | transformers |
### opus-mt-gaa-es
* source languages: gaa
* target languages: es
* OPUS readme: [gaa-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.es | 28.6 | 0.463 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-gaa-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"gaa",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-gaa-es
* source languages: gaa
* target languages: es
* OPUS readme: gaa-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 28.6, chr-F: 0.463
| [
"### opus-mt-gaa-es\n\n\n* source languages: gaa\n* target languages: es\n* OPUS readme: gaa-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.6, chr-F: 0.463"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-gaa-es\n\n\n* source languages: gaa\n* target languages: es\n* OPUS readme: gaa-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.6, chr-F: 0.463"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-gaa-es\n\n\n* source languages: gaa\n* target languages: es\n* OPUS readme: gaa-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.6, chr-F: 0.463"
] |
translation | transformers |
### opus-mt-gaa-fi
* source languages: gaa
* target languages: fi
* OPUS readme: [gaa-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.fi | 26.4 | 0.498 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-gaa-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"gaa",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-gaa-fi
* source languages: gaa
* target languages: fi
* OPUS readme: gaa-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.4, chr-F: 0.498
| [
"### opus-mt-gaa-fi\n\n\n* source languages: gaa\n* target languages: fi\n* OPUS readme: gaa-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.4, chr-F: 0.498"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-gaa-fi\n\n\n* source languages: gaa\n* target languages: fi\n* OPUS readme: gaa-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.4, chr-F: 0.498"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-gaa-fi\n\n\n* source languages: gaa\n* target languages: fi\n* OPUS readme: gaa-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.4, chr-F: 0.498"
] |
translation | transformers |
### opus-mt-gaa-fr
* source languages: gaa
* target languages: fr
* OPUS readme: [gaa-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.fr | 27.8 | 0.455 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-gaa-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"gaa",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-gaa-fr
* source languages: gaa
* target languages: fr
* OPUS readme: gaa-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.8, chr-F: 0.455
| [
"### opus-mt-gaa-fr\n\n\n* source languages: gaa\n* target languages: fr\n* OPUS readme: gaa-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.455"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-gaa-fr\n\n\n* source languages: gaa\n* target languages: fr\n* OPUS readme: gaa-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.455"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-gaa-fr\n\n\n* source languages: gaa\n* target languages: fr\n* OPUS readme: gaa-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.455"
] |
translation | transformers |
### opus-mt-gaa-sv
* source languages: gaa
* target languages: sv
* OPUS readme: [gaa-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gaa-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gaa-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gaa-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gaa.sv | 30.1 | 0.489 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-gaa-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"gaa",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-gaa-sv
* source languages: gaa
* target languages: sv
* OPUS readme: gaa-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.1, chr-F: 0.489
| [
"### opus-mt-gaa-sv\n\n\n* source languages: gaa\n* target languages: sv\n* OPUS readme: gaa-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.1, chr-F: 0.489"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-gaa-sv\n\n\n* source languages: gaa\n* target languages: sv\n* OPUS readme: gaa-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.1, chr-F: 0.489"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gaa #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-gaa-sv\n\n\n* source languages: gaa\n* target languages: sv\n* OPUS readme: gaa-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.1, chr-F: 0.489"
] |
translation | transformers |
### gem-eng
* source group: Germanic languages
* target group: English
* OPUS readme: [gem-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-eng/README.md)
* model: transformer
* source language(s): afr ang_Latn dan deu enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-deueng.deu.eng | 27.2 | 0.542 |
| news-test2008-deueng.deu.eng | 26.3 | 0.536 |
| newstest2009-deueng.deu.eng | 25.1 | 0.531 |
| newstest2010-deueng.deu.eng | 28.3 | 0.569 |
| newstest2011-deueng.deu.eng | 26.0 | 0.543 |
| newstest2012-deueng.deu.eng | 26.8 | 0.550 |
| newstest2013-deueng.deu.eng | 30.2 | 0.570 |
| newstest2014-deen-deueng.deu.eng | 30.7 | 0.574 |
| newstest2015-ende-deueng.deu.eng | 32.1 | 0.581 |
| newstest2016-ende-deueng.deu.eng | 36.9 | 0.624 |
| newstest2017-ende-deueng.deu.eng | 32.8 | 0.588 |
| newstest2018-ende-deueng.deu.eng | 40.2 | 0.640 |
| newstest2019-deen-deueng.deu.eng | 36.8 | 0.614 |
| Tatoeba-test.afr-eng.afr.eng | 62.8 | 0.758 |
| Tatoeba-test.ang-eng.ang.eng | 10.5 | 0.262 |
| Tatoeba-test.dan-eng.dan.eng | 61.6 | 0.754 |
| Tatoeba-test.deu-eng.deu.eng | 49.7 | 0.665 |
| Tatoeba-test.enm-eng.enm.eng | 23.9 | 0.491 |
| Tatoeba-test.fao-eng.fao.eng | 23.4 | 0.446 |
| Tatoeba-test.frr-eng.frr.eng | 10.2 | 0.184 |
| Tatoeba-test.fry-eng.fry.eng | 29.6 | 0.486 |
| Tatoeba-test.gos-eng.gos.eng | 17.8 | 0.352 |
| Tatoeba-test.got-eng.got.eng | 0.1 | 0.058 |
| Tatoeba-test.gsw-eng.gsw.eng | 15.3 | 0.333 |
| Tatoeba-test.isl-eng.isl.eng | 51.0 | 0.669 |
| Tatoeba-test.ksh-eng.ksh.eng | 6.7 | 0.266 |
| Tatoeba-test.ltz-eng.ltz.eng | 33.0 | 0.505 |
| Tatoeba-test.multi.eng | 54.0 | 0.687 |
| Tatoeba-test.nds-eng.nds.eng | 33.6 | 0.529 |
| Tatoeba-test.nld-eng.nld.eng | 58.9 | 0.733 |
| Tatoeba-test.non-eng.non.eng | 37.3 | 0.546 |
| Tatoeba-test.nor-eng.nor.eng | 54.9 | 0.696 |
| Tatoeba-test.pdc-eng.pdc.eng | 29.6 | 0.446 |
| Tatoeba-test.sco-eng.sco.eng | 40.5 | 0.581 |
| Tatoeba-test.stq-eng.stq.eng | 14.5 | 0.361 |
| Tatoeba-test.swe-eng.swe.eng | 62.0 | 0.745 |
| Tatoeba-test.swg-eng.swg.eng | 17.1 | 0.334 |
| Tatoeba-test.yid-eng.yid.eng | 19.4 | 0.400 |
### System Info:
- hf_name: gem-eng
- source_languages: gem
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']
- src_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gem-eng/opus2m-2020-08-01.test.txt
- src_alpha3: gem
- tgt_alpha3: eng
- short_pair: gem-en
- chrF2_score: 0.687
- bleu: 54.0
- brevity_penalty: 0.993
- ref_len: 72120.0
- src_name: Germanic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: gem
- tgt_alpha2: en
- prefer_old: False
- long_pair: gem-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["da", "sv", "af", "nn", "fy", "fo", "de", "nb", "nl", "is", "en", "lb", "yi", "gem"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-gem-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"da",
"sv",
"af",
"nn",
"fy",
"fo",
"de",
"nb",
"nl",
"is",
"en",
"lb",
"yi",
"gem",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"da",
"sv",
"af",
"nn",
"fy",
"fo",
"de",
"nb",
"nl",
"is",
"en",
"lb",
"yi",
"gem"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #da #sv #af #nn #fy #fo #de #nb #nl #is #en #lb #yi #gem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### gem-eng
* source group: Germanic languages
* target group: English
* OPUS readme: gem-eng
* model: transformer
* source language(s): afr ang\_Latn dan deu enm\_Latn fao frr fry gos got\_Goth gsw isl ksh ltz nds nld nno nob nob\_Hebr non\_Latn pdc sco stq swe swg yid
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.2, chr-F: 0.542
testset: URL, BLEU: 26.3, chr-F: 0.536
testset: URL, BLEU: 25.1, chr-F: 0.531
testset: URL, BLEU: 28.3, chr-F: 0.569
testset: URL, BLEU: 26.0, chr-F: 0.543
testset: URL, BLEU: 26.8, chr-F: 0.550
testset: URL, BLEU: 30.2, chr-F: 0.570
testset: URL, BLEU: 30.7, chr-F: 0.574
testset: URL, BLEU: 32.1, chr-F: 0.581
testset: URL, BLEU: 36.9, chr-F: 0.624
testset: URL, BLEU: 32.8, chr-F: 0.588
testset: URL, BLEU: 40.2, chr-F: 0.640
testset: URL, BLEU: 36.8, chr-F: 0.614
testset: URL, BLEU: 62.8, chr-F: 0.758
testset: URL, BLEU: 10.5, chr-F: 0.262
testset: URL, BLEU: 61.6, chr-F: 0.754
testset: URL, BLEU: 49.7, chr-F: 0.665
testset: URL, BLEU: 23.9, chr-F: 0.491
testset: URL, BLEU: 23.4, chr-F: 0.446
testset: URL, BLEU: 10.2, chr-F: 0.184
testset: URL, BLEU: 29.6, chr-F: 0.486
testset: URL, BLEU: 17.8, chr-F: 0.352
testset: URL, BLEU: 0.1, chr-F: 0.058
testset: URL, BLEU: 15.3, chr-F: 0.333
testset: URL, BLEU: 51.0, chr-F: 0.669
testset: URL, BLEU: 6.7, chr-F: 0.266
testset: URL, BLEU: 33.0, chr-F: 0.505
testset: URL, BLEU: 54.0, chr-F: 0.687
testset: URL, BLEU: 33.6, chr-F: 0.529
testset: URL, BLEU: 58.9, chr-F: 0.733
testset: URL, BLEU: 37.3, chr-F: 0.546
testset: URL, BLEU: 54.9, chr-F: 0.696
testset: URL, BLEU: 29.6, chr-F: 0.446
testset: URL, BLEU: 40.5, chr-F: 0.581
testset: URL, BLEU: 14.5, chr-F: 0.361
testset: URL, BLEU: 62.0, chr-F: 0.745
testset: URL, BLEU: 17.1, chr-F: 0.334
testset: URL, BLEU: 19.4, chr-F: 0.400
### System Info:
* hf\_name: gem-eng
* source\_languages: gem
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']
* src\_constituents: {'ksh', 'enm\_Latn', 'got\_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob\_Hebr', 'ang\_Latn', 'frr', 'non\_Latn', 'yid', 'nds'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: gem
* tgt\_alpha3: eng
* short\_pair: gem-en
* chrF2\_score: 0.687
* bleu: 54.0
* brevity\_penalty: 0.993
* ref\_len: 72120.0
* src\_name: Germanic languages
* tgt\_name: English
* train\_date: 2020-08-01
* src\_alpha2: gem
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: gem-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### gem-eng\n\n\n* source group: Germanic languages\n* target group: English\n* OPUS readme: gem-eng\n* model: transformer\n* source language(s): afr ang\\_Latn dan deu enm\\_Latn fao frr fry gos got\\_Goth gsw isl ksh ltz nds nld nno nob nob\\_Hebr non\\_Latn pdc sco stq swe swg yid\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.2, chr-F: 0.542\ntestset: URL, BLEU: 26.3, chr-F: 0.536\ntestset: URL, BLEU: 25.1, chr-F: 0.531\ntestset: URL, BLEU: 28.3, chr-F: 0.569\ntestset: URL, BLEU: 26.0, chr-F: 0.543\ntestset: URL, BLEU: 26.8, chr-F: 0.550\ntestset: URL, BLEU: 30.2, chr-F: 0.570\ntestset: URL, BLEU: 30.7, chr-F: 0.574\ntestset: URL, BLEU: 32.1, chr-F: 0.581\ntestset: URL, BLEU: 36.9, chr-F: 0.624\ntestset: URL, BLEU: 32.8, chr-F: 0.588\ntestset: URL, BLEU: 40.2, chr-F: 0.640\ntestset: URL, BLEU: 36.8, chr-F: 0.614\ntestset: URL, BLEU: 62.8, chr-F: 0.758\ntestset: URL, BLEU: 10.5, chr-F: 0.262\ntestset: URL, BLEU: 61.6, chr-F: 0.754\ntestset: URL, BLEU: 49.7, chr-F: 0.665\ntestset: URL, BLEU: 23.9, chr-F: 0.491\ntestset: URL, BLEU: 23.4, chr-F: 0.446\ntestset: URL, BLEU: 10.2, chr-F: 0.184\ntestset: URL, BLEU: 29.6, chr-F: 0.486\ntestset: URL, BLEU: 17.8, chr-F: 0.352\ntestset: URL, BLEU: 0.1, chr-F: 0.058\ntestset: URL, BLEU: 15.3, chr-F: 0.333\ntestset: URL, BLEU: 51.0, chr-F: 0.669\ntestset: URL, BLEU: 6.7, chr-F: 0.266\ntestset: URL, BLEU: 33.0, chr-F: 0.505\ntestset: URL, BLEU: 54.0, chr-F: 0.687\ntestset: URL, BLEU: 33.6, chr-F: 0.529\ntestset: URL, BLEU: 58.9, chr-F: 0.733\ntestset: URL, BLEU: 37.3, chr-F: 0.546\ntestset: URL, BLEU: 54.9, chr-F: 0.696\ntestset: URL, BLEU: 29.6, chr-F: 0.446\ntestset: URL, BLEU: 40.5, chr-F: 0.581\ntestset: URL, BLEU: 14.5, chr-F: 0.361\ntestset: URL, BLEU: 62.0, chr-F: 0.745\ntestset: URL, BLEU: 17.1, chr-F: 
0.334\ntestset: URL, BLEU: 19.4, chr-F: 0.400",
"### System Info:\n\n\n* hf\\_name: gem-eng\n* source\\_languages: gem\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']\n* src\\_constituents: {'ksh', 'enm\\_Latn', 'got\\_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob\\_Hebr', 'ang\\_Latn', 'frr', 'non\\_Latn', 'yid', 'nds'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: gem\n* tgt\\_alpha3: eng\n* short\\_pair: gem-en\n* chrF2\\_score: 0.687\n* bleu: 54.0\n* brevity\\_penalty: 0.993\n* ref\\_len: 72120.0\n* src\\_name: Germanic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: gem\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: gem-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #da #sv #af #nn #fy #fo #de #nb #nl #is #en #lb #yi #gem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### gem-eng\n\n\n* source group: Germanic languages\n* target group: English\n* OPUS readme: gem-eng\n* model: transformer\n* source language(s): afr ang\\_Latn dan deu enm\\_Latn fao frr fry gos got\\_Goth gsw isl ksh ltz nds nld nno nob nob\\_Hebr non\\_Latn pdc sco stq swe swg yid\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.2, chr-F: 0.542\ntestset: URL, BLEU: 26.3, chr-F: 0.536\ntestset: URL, BLEU: 25.1, chr-F: 0.531\ntestset: URL, BLEU: 28.3, chr-F: 0.569\ntestset: URL, BLEU: 26.0, chr-F: 0.543\ntestset: URL, BLEU: 26.8, chr-F: 0.550\ntestset: URL, BLEU: 30.2, chr-F: 0.570\ntestset: URL, BLEU: 30.7, chr-F: 0.574\ntestset: URL, BLEU: 32.1, chr-F: 0.581\ntestset: URL, BLEU: 36.9, chr-F: 0.624\ntestset: URL, BLEU: 32.8, chr-F: 0.588\ntestset: URL, BLEU: 40.2, chr-F: 0.640\ntestset: URL, BLEU: 36.8, chr-F: 0.614\ntestset: URL, BLEU: 62.8, chr-F: 0.758\ntestset: URL, BLEU: 10.5, chr-F: 0.262\ntestset: URL, BLEU: 61.6, chr-F: 0.754\ntestset: URL, BLEU: 49.7, chr-F: 0.665\ntestset: URL, BLEU: 23.9, chr-F: 0.491\ntestset: URL, BLEU: 23.4, chr-F: 0.446\ntestset: URL, BLEU: 10.2, chr-F: 0.184\ntestset: URL, BLEU: 29.6, chr-F: 0.486\ntestset: URL, BLEU: 17.8, chr-F: 0.352\ntestset: URL, BLEU: 0.1, chr-F: 0.058\ntestset: URL, BLEU: 15.3, chr-F: 0.333\ntestset: URL, BLEU: 51.0, chr-F: 0.669\ntestset: URL, BLEU: 6.7, chr-F: 0.266\ntestset: URL, BLEU: 33.0, chr-F: 0.505\ntestset: URL, BLEU: 54.0, chr-F: 0.687\ntestset: URL, BLEU: 33.6, chr-F: 0.529\ntestset: URL, BLEU: 58.9, chr-F: 0.733\ntestset: URL, BLEU: 37.3, chr-F: 0.546\ntestset: URL, BLEU: 54.9, chr-F: 0.696\ntestset: URL, BLEU: 29.6, chr-F: 0.446\ntestset: URL, BLEU: 40.5, chr-F: 0.581\ntestset: URL, BLEU: 14.5, chr-F: 0.361\ntestset: URL, BLEU: 62.0, chr-F: 0.745\ntestset: URL, BLEU: 17.1, chr-F: 
0.334\ntestset: URL, BLEU: 19.4, chr-F: 0.400",
"### System Info:\n\n\n* hf\\_name: gem-eng\n* source\\_languages: gem\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']\n* src\\_constituents: {'ksh', 'enm\\_Latn', 'got\\_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob\\_Hebr', 'ang\\_Latn', 'frr', 'non\\_Latn', 'yid', 'nds'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: gem\n* tgt\\_alpha3: eng\n* short\\_pair: gem-en\n* chrF2\\_score: 0.687\n* bleu: 54.0\n* brevity\\_penalty: 0.993\n* ref\\_len: 72120.0\n* src\\_name: Germanic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: gem\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: gem-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
79,
1036,
592
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #da #sv #af #nn #fy #fo #de #nb #nl #is #en #lb #yi #gem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### gem-eng\n\n\n* source group: Germanic languages\n* target group: English\n* OPUS readme: gem-eng\n* model: transformer\n* source language(s): afr ang\\_Latn dan deu enm\\_Latn fao frr fry gos got\\_Goth gsw isl ksh ltz nds nld nno nob nob\\_Hebr non\\_Latn pdc sco stq swe swg yid\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.2, chr-F: 0.542\ntestset: URL, BLEU: 26.3, chr-F: 0.536\ntestset: URL, BLEU: 25.1, chr-F: 0.531\ntestset: URL, BLEU: 28.3, chr-F: 0.569\ntestset: URL, BLEU: 26.0, chr-F: 0.543\ntestset: URL, BLEU: 26.8, chr-F: 0.550\ntestset: URL, BLEU: 30.2, chr-F: 0.570\ntestset: URL, BLEU: 30.7, chr-F: 0.574\ntestset: URL, BLEU: 32.1, chr-F: 0.581\ntestset: URL, BLEU: 36.9, chr-F: 0.624\ntestset: URL, BLEU: 32.8, chr-F: 0.588\ntestset: URL, BLEU: 40.2, chr-F: 0.640\ntestset: URL, BLEU: 36.8, chr-F: 0.614\ntestset: URL, BLEU: 62.8, chr-F: 0.758\ntestset: URL, BLEU: 10.5, chr-F: 0.262\ntestset: URL, BLEU: 61.6, chr-F: 0.754\ntestset: URL, BLEU: 49.7, chr-F: 0.665\ntestset: URL, BLEU: 23.9, chr-F: 0.491\ntestset: URL, BLEU: 23.4, chr-F: 0.446\ntestset: URL, BLEU: 10.2, chr-F: 0.184\ntestset: URL, BLEU: 29.6, chr-F: 0.486\ntestset: URL, BLEU: 17.8, chr-F: 0.352\ntestset: URL, BLEU: 0.1, chr-F: 0.058\ntestset: URL, BLEU: 15.3, chr-F: 0.333\ntestset: URL, BLEU: 51.0, chr-F: 0.669\ntestset: URL, BLEU: 6.7, chr-F: 0.266\ntestset: URL, BLEU: 33.0, chr-F: 0.505\ntestset: URL, BLEU: 54.0, chr-F: 0.687\ntestset: URL, BLEU: 33.6, chr-F: 0.529\ntestset: URL, BLEU: 58.9, chr-F: 0.733\ntestset: URL, BLEU: 37.3, chr-F: 0.546\ntestset: URL, BLEU: 
54.9, chr-F: 0.696\ntestset: URL, BLEU: 29.6, chr-F: 0.446\ntestset: URL, BLEU: 40.5, chr-F: 0.581\ntestset: URL, BLEU: 14.5, chr-F: 0.361\ntestset: URL, BLEU: 62.0, chr-F: 0.745\ntestset: URL, BLEU: 17.1, chr-F: 0.334\ntestset: URL, BLEU: 19.4, chr-F: 0.400### System Info:\n\n\n* hf\\_name: gem-eng\n* source\\_languages: gem\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']\n* src\\_constituents: {'ksh', 'enm\\_Latn', 'got\\_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob\\_Hebr', 'ang\\_Latn', 'frr', 'non\\_Latn', 'yid', 'nds'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: gem\n* tgt\\_alpha3: eng\n* short\\_pair: gem-en\n* chrF2\\_score: 0.687\n* bleu: 54.0\n* brevity\\_penalty: 0.993\n* ref\\_len: 72120.0\n* src\\_name: Germanic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: gem\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: gem-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### gem-gem
* source group: Germanic languages
* target group: Germanic languages
* OPUS readme: [gem-gem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-gem/README.md)
* model: transformer
* source language(s): afr ang_Latn dan deu eng enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid
* target language(s): afr ang_Latn dan deu eng enm_Latn fao frr fry gos got_Goth gsw isl ksh ltz nds nld nno nob nob_Hebr non_Latn pdc sco stq swe swg yid
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.eval.txt)
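Because this is a multilingual model, every source sentence must begin with the target-language token described above before tokenization. A minimal sketch of that preprocessing step (the helper name is illustrative, not part of the released model):

```python
def add_target_token(sentences, target_lang):
    """Prepend the sentence-initial >>id<< token that tells the
    multilingual model which target language to translate into."""
    token = f">>{target_lang}<<"
    return [f"{token} {sentence}" for sentence in sentences]

# e.g. preparing a Dutch sentence for translation into Swedish ("swe")
batch = add_target_token(["Dit is een test."], "swe")
print(batch[0])  # >>swe<< Dit is een test.
```

The prefixed strings are then passed to the model's SentencePiece tokenizer as usual; any code of the source-language constituents listed above can appear in the input, and `id` must be one of the target-language constituents.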
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-deueng.deu.eng | 24.5 | 0.519 |
| newssyscomb2009-engdeu.eng.deu | 18.7 | 0.495 |
| news-test2008-deueng.deu.eng | 22.8 | 0.509 |
| news-test2008-engdeu.eng.deu | 18.6 | 0.485 |
| newstest2009-deueng.deu.eng | 22.2 | 0.507 |
| newstest2009-engdeu.eng.deu | 18.3 | 0.491 |
| newstest2010-deueng.deu.eng | 24.8 | 0.537 |
| newstest2010-engdeu.eng.deu | 19.7 | 0.499 |
| newstest2011-deueng.deu.eng | 22.9 | 0.516 |
| newstest2011-engdeu.eng.deu | 18.3 | 0.485 |
| newstest2012-deueng.deu.eng | 23.9 | 0.524 |
| newstest2012-engdeu.eng.deu | 18.5 | 0.484 |
| newstest2013-deueng.deu.eng | 26.3 | 0.537 |
| newstest2013-engdeu.eng.deu | 21.5 | 0.506 |
| newstest2014-deen-deueng.deu.eng | 25.7 | 0.535 |
| newstest2015-ende-deueng.deu.eng | 27.3 | 0.542 |
| newstest2015-ende-engdeu.eng.deu | 24.2 | 0.534 |
| newstest2016-ende-deueng.deu.eng | 31.8 | 0.584 |
| newstest2016-ende-engdeu.eng.deu | 28.4 | 0.564 |
| newstest2017-ende-deueng.deu.eng | 27.6 | 0.545 |
| newstest2017-ende-engdeu.eng.deu | 22.8 | 0.527 |
| newstest2018-ende-deueng.deu.eng | 34.1 | 0.593 |
| newstest2018-ende-engdeu.eng.deu | 32.7 | 0.595 |
| newstest2019-deen-deueng.deu.eng | 30.6 | 0.565 |
| newstest2019-ende-engdeu.eng.deu | 29.5 | 0.567 |
| Tatoeba-test.afr-ang.afr.ang | 0.0 | 0.053 |
| Tatoeba-test.afr-dan.afr.dan | 57.8 | 0.907 |
| Tatoeba-test.afr-deu.afr.deu | 46.4 | 0.663 |
| Tatoeba-test.afr-eng.afr.eng | 57.4 | 0.717 |
| Tatoeba-test.afr-enm.afr.enm | 11.3 | 0.285 |
| Tatoeba-test.afr-fry.afr.fry | 0.0 | 0.167 |
| Tatoeba-test.afr-gos.afr.gos | 1.5 | 0.178 |
| Tatoeba-test.afr-isl.afr.isl | 29.0 | 0.760 |
| Tatoeba-test.afr-ltz.afr.ltz | 11.2 | 0.246 |
| Tatoeba-test.afr-nld.afr.nld | 53.3 | 0.708 |
| Tatoeba-test.afr-nor.afr.nor | 66.0 | 0.752 |
| Tatoeba-test.afr-swe.afr.swe | 88.0 | 0.955 |
| Tatoeba-test.afr-yid.afr.yid | 59.5 | 0.443 |
| Tatoeba-test.ang-afr.ang.afr | 10.7 | 0.043 |
| Tatoeba-test.ang-dan.ang.dan | 6.3 | 0.190 |
| Tatoeba-test.ang-deu.ang.deu | 1.4 | 0.212 |
| Tatoeba-test.ang-eng.ang.eng | 8.1 | 0.247 |
| Tatoeba-test.ang-enm.ang.enm | 1.7 | 0.196 |
| Tatoeba-test.ang-fao.ang.fao | 10.7 | 0.105 |
| Tatoeba-test.ang-gos.ang.gos | 10.7 | 0.128 |
| Tatoeba-test.ang-isl.ang.isl | 16.0 | 0.135 |
| Tatoeba-test.ang-ltz.ang.ltz | 16.0 | 0.121 |
| Tatoeba-test.ang-yid.ang.yid | 1.5 | 0.136 |
| Tatoeba-test.dan-afr.dan.afr | 22.7 | 0.655 |
| Tatoeba-test.dan-ang.dan.ang | 3.1 | 0.110 |
| Tatoeba-test.dan-deu.dan.deu | 47.4 | 0.676 |
| Tatoeba-test.dan-eng.dan.eng | 54.7 | 0.704 |
| Tatoeba-test.dan-enm.dan.enm | 4.8 | 0.291 |
| Tatoeba-test.dan-fao.dan.fao | 9.7 | 0.120 |
| Tatoeba-test.dan-gos.dan.gos | 3.8 | 0.240 |
| Tatoeba-test.dan-isl.dan.isl | 66.1 | 0.678 |
| Tatoeba-test.dan-ltz.dan.ltz | 78.3 | 0.563 |
| Tatoeba-test.dan-nds.dan.nds | 6.2 | 0.335 |
| Tatoeba-test.dan-nld.dan.nld | 60.0 | 0.748 |
| Tatoeba-test.dan-nor.dan.nor | 68.1 | 0.812 |
| Tatoeba-test.dan-swe.dan.swe | 65.0 | 0.785 |
| Tatoeba-test.dan-swg.dan.swg | 2.6 | 0.182 |
| Tatoeba-test.dan-yid.dan.yid | 9.3 | 0.226 |
| Tatoeba-test.deu-afr.deu.afr | 50.3 | 0.682 |
| Tatoeba-test.deu-ang.deu.ang | 0.5 | 0.118 |
| Tatoeba-test.deu-dan.deu.dan | 49.6 | 0.679 |
| Tatoeba-test.deu-eng.deu.eng | 43.4 | 0.618 |
| Tatoeba-test.deu-enm.deu.enm | 2.2 | 0.159 |
| Tatoeba-test.deu-frr.deu.frr | 0.4 | 0.156 |
| Tatoeba-test.deu-fry.deu.fry | 10.7 | 0.355 |
| Tatoeba-test.deu-gos.deu.gos | 0.7 | 0.183 |
| Tatoeba-test.deu-got.deu.got | 0.3 | 0.010 |
| Tatoeba-test.deu-gsw.deu.gsw | 1.1 | 0.130 |
| Tatoeba-test.deu-isl.deu.isl | 24.3 | 0.504 |
| Tatoeba-test.deu-ksh.deu.ksh | 0.9 | 0.173 |
| Tatoeba-test.deu-ltz.deu.ltz | 15.6 | 0.304 |
| Tatoeba-test.deu-nds.deu.nds | 21.2 | 0.469 |
| Tatoeba-test.deu-nld.deu.nld | 47.1 | 0.657 |
| Tatoeba-test.deu-nor.deu.nor | 43.9 | 0.646 |
| Tatoeba-test.deu-pdc.deu.pdc | 3.0 | 0.133 |
| Tatoeba-test.deu-sco.deu.sco | 12.0 | 0.296 |
| Tatoeba-test.deu-stq.deu.stq | 0.6 | 0.137 |
| Tatoeba-test.deu-swe.deu.swe | 50.6 | 0.668 |
| Tatoeba-test.deu-swg.deu.swg | 0.2 | 0.137 |
| Tatoeba-test.deu-yid.deu.yid | 3.9 | 0.229 |
| Tatoeba-test.eng-afr.eng.afr | 55.2 | 0.721 |
| Tatoeba-test.eng-ang.eng.ang | 4.9 | 0.118 |
| Tatoeba-test.eng-dan.eng.dan | 52.6 | 0.684 |
| Tatoeba-test.eng-deu.eng.deu | 35.4 | 0.573 |
| Tatoeba-test.eng-enm.eng.enm | 1.8 | 0.223 |
| Tatoeba-test.eng-fao.eng.fao | 7.0 | 0.312 |
| Tatoeba-test.eng-frr.eng.frr | 1.2 | 0.050 |
| Tatoeba-test.eng-fry.eng.fry | 15.8 | 0.381 |
| Tatoeba-test.eng-gos.eng.gos | 0.7 | 0.170 |
| Tatoeba-test.eng-got.eng.got | 0.3 | 0.011 |
| Tatoeba-test.eng-gsw.eng.gsw | 0.5 | 0.126 |
| Tatoeba-test.eng-isl.eng.isl | 20.9 | 0.463 |
| Tatoeba-test.eng-ksh.eng.ksh | 1.0 | 0.141 |
| Tatoeba-test.eng-ltz.eng.ltz | 12.8 | 0.292 |
| Tatoeba-test.eng-nds.eng.nds | 18.3 | 0.428 |
| Tatoeba-test.eng-nld.eng.nld | 47.3 | 0.657 |
| Tatoeba-test.eng-non.eng.non | 0.3 | 0.145 |
| Tatoeba-test.eng-nor.eng.nor | 47.2 | 0.650 |
| Tatoeba-test.eng-pdc.eng.pdc | 4.8 | 0.177 |
| Tatoeba-test.eng-sco.eng.sco | 38.1 | 0.597 |
| Tatoeba-test.eng-stq.eng.stq | 2.4 | 0.288 |
| Tatoeba-test.eng-swe.eng.swe | 52.7 | 0.677 |
| Tatoeba-test.eng-swg.eng.swg | 1.1 | 0.163 |
| Tatoeba-test.eng-yid.eng.yid | 4.5 | 0.223 |
| Tatoeba-test.enm-afr.enm.afr | 22.8 | 0.401 |
| Tatoeba-test.enm-ang.enm.ang | 0.4 | 0.062 |
| Tatoeba-test.enm-dan.enm.dan | 51.4 | 0.782 |
| Tatoeba-test.enm-deu.enm.deu | 33.8 | 0.473 |
| Tatoeba-test.enm-eng.enm.eng | 22.4 | 0.495 |
| Tatoeba-test.enm-fry.enm.fry | 16.0 | 0.173 |
| Tatoeba-test.enm-gos.enm.gos | 6.1 | 0.222 |
| Tatoeba-test.enm-isl.enm.isl | 59.5 | 0.651 |
| Tatoeba-test.enm-ksh.enm.ksh | 10.5 | 0.130 |
| Tatoeba-test.enm-nds.enm.nds | 18.1 | 0.327 |
| Tatoeba-test.enm-nld.enm.nld | 38.3 | 0.546 |
| Tatoeba-test.enm-nor.enm.nor | 15.6 | 0.290 |
| Tatoeba-test.enm-yid.enm.yid | 2.3 | 0.215 |
| Tatoeba-test.fao-ang.fao.ang | 2.1 | 0.035 |
| Tatoeba-test.fao-dan.fao.dan | 53.7 | 0.625 |
| Tatoeba-test.fao-eng.fao.eng | 24.7 | 0.435 |
| Tatoeba-test.fao-gos.fao.gos | 12.7 | 0.116 |
| Tatoeba-test.fao-isl.fao.isl | 26.3 | 0.341 |
| Tatoeba-test.fao-nor.fao.nor | 41.9 | 0.586 |
| Tatoeba-test.fao-swe.fao.swe | 0.0 | 1.000 |
| Tatoeba-test.frr-deu.frr.deu | 7.4 | 0.263 |
| Tatoeba-test.frr-eng.frr.eng | 7.0 | 0.157 |
| Tatoeba-test.frr-fry.frr.fry | 4.0 | 0.112 |
| Tatoeba-test.frr-gos.frr.gos | 1.0 | 0.135 |
| Tatoeba-test.frr-nds.frr.nds | 12.4 | 0.207 |
| Tatoeba-test.frr-nld.frr.nld | 10.6 | 0.227 |
| Tatoeba-test.frr-stq.frr.stq | 1.0 | 0.058 |
| Tatoeba-test.fry-afr.fry.afr | 12.7 | 0.333 |
| Tatoeba-test.fry-deu.fry.deu | 30.8 | 0.555 |
| Tatoeba-test.fry-eng.fry.eng | 31.2 | 0.506 |
| Tatoeba-test.fry-enm.fry.enm | 0.0 | 0.175 |
| Tatoeba-test.fry-frr.fry.frr | 1.6 | 0.091 |
| Tatoeba-test.fry-gos.fry.gos | 1.1 | 0.254 |
| Tatoeba-test.fry-ltz.fry.ltz | 30.4 | 0.526 |
| Tatoeba-test.fry-nds.fry.nds | 12.4 | 0.116 |
| Tatoeba-test.fry-nld.fry.nld | 43.4 | 0.637 |
| Tatoeba-test.fry-nor.fry.nor | 47.1 | 0.607 |
| Tatoeba-test.fry-stq.fry.stq | 0.6 | 0.181 |
| Tatoeba-test.fry-swe.fry.swe | 30.2 | 0.587 |
| Tatoeba-test.fry-yid.fry.yid | 3.1 | 0.173 |
| Tatoeba-test.gos-afr.gos.afr | 1.8 | 0.215 |
| Tatoeba-test.gos-ang.gos.ang | 0.0 | 0.045 |
| Tatoeba-test.gos-dan.gos.dan | 4.1 | 0.236 |
| Tatoeba-test.gos-deu.gos.deu | 19.6 | 0.406 |
| Tatoeba-test.gos-eng.gos.eng | 15.1 | 0.329 |
| Tatoeba-test.gos-enm.gos.enm | 5.8 | 0.271 |
| Tatoeba-test.gos-fao.gos.fao | 19.0 | 0.136 |
| Tatoeba-test.gos-frr.gos.frr | 1.3 | 0.119 |
| Tatoeba-test.gos-fry.gos.fry | 17.1 | 0.388 |
| Tatoeba-test.gos-isl.gos.isl | 16.8 | 0.356 |
| Tatoeba-test.gos-ltz.gos.ltz | 3.6 | 0.174 |
| Tatoeba-test.gos-nds.gos.nds | 4.7 | 0.225 |
| Tatoeba-test.gos-nld.gos.nld | 16.3 | 0.406 |
| Tatoeba-test.gos-stq.gos.stq | 0.7 | 0.154 |
| Tatoeba-test.gos-swe.gos.swe | 8.6 | 0.319 |
| Tatoeba-test.gos-yid.gos.yid | 4.4 | 0.165 |
| Tatoeba-test.got-deu.got.deu | 0.2 | 0.041 |
| Tatoeba-test.got-eng.got.eng | 0.2 | 0.068 |
| Tatoeba-test.got-nor.got.nor | 0.6 | 0.000 |
| Tatoeba-test.gsw-deu.gsw.deu | 15.9 | 0.373 |
| Tatoeba-test.gsw-eng.gsw.eng | 14.7 | 0.320 |
| Tatoeba-test.isl-afr.isl.afr | 38.0 | 0.641 |
| Tatoeba-test.isl-ang.isl.ang | 0.0 | 0.037 |
| Tatoeba-test.isl-dan.isl.dan | 67.7 | 0.836 |
| Tatoeba-test.isl-deu.isl.deu | 42.6 | 0.614 |
| Tatoeba-test.isl-eng.isl.eng | 43.5 | 0.610 |
| Tatoeba-test.isl-enm.isl.enm | 12.4 | 0.123 |
| Tatoeba-test.isl-fao.isl.fao | 15.6 | 0.176 |
| Tatoeba-test.isl-gos.isl.gos | 7.1 | 0.257 |
| Tatoeba-test.isl-nor.isl.nor | 53.5 | 0.690 |
| Tatoeba-test.isl-stq.isl.stq | 10.7 | 0.176 |
| Tatoeba-test.isl-swe.isl.swe | 67.7 | 0.818 |
| Tatoeba-test.ksh-deu.ksh.deu | 11.8 | 0.393 |
| Tatoeba-test.ksh-eng.ksh.eng | 4.0 | 0.239 |
| Tatoeba-test.ksh-enm.ksh.enm | 9.5 | 0.085 |
| Tatoeba-test.ltz-afr.ltz.afr | 36.5 | 0.529 |
| Tatoeba-test.ltz-ang.ltz.ang | 0.0 | 0.043 |
| Tatoeba-test.ltz-dan.ltz.dan | 80.6 | 0.722 |
| Tatoeba-test.ltz-deu.ltz.deu | 40.1 | 0.581 |
| Tatoeba-test.ltz-eng.ltz.eng | 36.1 | 0.511 |
| Tatoeba-test.ltz-fry.ltz.fry | 16.5 | 0.524 |
| Tatoeba-test.ltz-gos.ltz.gos | 0.7 | 0.118 |
| Tatoeba-test.ltz-nld.ltz.nld | 40.4 | 0.535 |
| Tatoeba-test.ltz-nor.ltz.nor | 19.1 | 0.582 |
| Tatoeba-test.ltz-stq.ltz.stq | 2.4 | 0.093 |
| Tatoeba-test.ltz-swe.ltz.swe | 25.9 | 0.430 |
| Tatoeba-test.ltz-yid.ltz.yid | 1.5 | 0.160 |
| Tatoeba-test.multi.multi | 42.7 | 0.614 |
| Tatoeba-test.nds-dan.nds.dan | 23.0 | 0.465 |
| Tatoeba-test.nds-deu.nds.deu | 39.8 | 0.610 |
| Tatoeba-test.nds-eng.nds.eng | 32.0 | 0.520 |
| Tatoeba-test.nds-enm.nds.enm | 3.9 | 0.156 |
| Tatoeba-test.nds-frr.nds.frr | 10.7 | 0.127 |
| Tatoeba-test.nds-fry.nds.fry | 10.7 | 0.231 |
| Tatoeba-test.nds-gos.nds.gos | 0.8 | 0.157 |
| Tatoeba-test.nds-nld.nds.nld | 44.1 | 0.634 |
| Tatoeba-test.nds-nor.nds.nor | 47.1 | 0.665 |
| Tatoeba-test.nds-swg.nds.swg | 0.5 | 0.166 |
| Tatoeba-test.nds-yid.nds.yid | 12.7 | 0.337 |
| Tatoeba-test.nld-afr.nld.afr | 58.4 | 0.748 |
| Tatoeba-test.nld-dan.nld.dan | 61.3 | 0.753 |
| Tatoeba-test.nld-deu.nld.deu | 48.2 | 0.670 |
| Tatoeba-test.nld-eng.nld.eng | 52.8 | 0.690 |
| Tatoeba-test.nld-enm.nld.enm | 5.7 | 0.178 |
| Tatoeba-test.nld-frr.nld.frr | 0.9 | 0.159 |
| Tatoeba-test.nld-fry.nld.fry | 23.0 | 0.467 |
| Tatoeba-test.nld-gos.nld.gos | 1.0 | 0.165 |
| Tatoeba-test.nld-ltz.nld.ltz | 14.4 | 0.310 |
| Tatoeba-test.nld-nds.nld.nds | 24.1 | 0.485 |
| Tatoeba-test.nld-nor.nld.nor | 53.6 | 0.705 |
| Tatoeba-test.nld-sco.nld.sco | 15.0 | 0.415 |
| Tatoeba-test.nld-stq.nld.stq | 0.5 | 0.183 |
| Tatoeba-test.nld-swe.nld.swe | 73.6 | 0.842 |
| Tatoeba-test.nld-swg.nld.swg | 4.2 | 0.191 |
| Tatoeba-test.nld-yid.nld.yid | 9.4 | 0.299 |
| Tatoeba-test.non-eng.non.eng | 27.7 | 0.501 |
| Tatoeba-test.nor-afr.nor.afr | 48.2 | 0.687 |
| Tatoeba-test.nor-dan.nor.dan | 69.5 | 0.820 |
| Tatoeba-test.nor-deu.nor.deu | 41.1 | 0.634 |
| Tatoeba-test.nor-eng.nor.eng | 49.4 | 0.660 |
| Tatoeba-test.nor-enm.nor.enm | 6.8 | 0.230 |
| Tatoeba-test.nor-fao.nor.fao | 6.9 | 0.395 |
| Tatoeba-test.nor-fry.nor.fry | 9.2 | 0.323 |
| Tatoeba-test.nor-got.nor.got | 1.5 | 0.000 |
| Tatoeba-test.nor-isl.nor.isl | 34.5 | 0.555 |
| Tatoeba-test.nor-ltz.nor.ltz | 22.1 | 0.447 |
| Tatoeba-test.nor-nds.nor.nds | 34.3 | 0.565 |
| Tatoeba-test.nor-nld.nor.nld | 50.5 | 0.676 |
| Tatoeba-test.nor-nor.nor.nor | 57.6 | 0.764 |
| Tatoeba-test.nor-swe.nor.swe | 68.9 | 0.813 |
| Tatoeba-test.nor-yid.nor.yid | 65.0 | 0.627 |
| Tatoeba-test.pdc-deu.pdc.deu | 43.5 | 0.559 |
| Tatoeba-test.pdc-eng.pdc.eng | 26.1 | 0.471 |
| Tatoeba-test.sco-deu.sco.deu | 7.1 | 0.295 |
| Tatoeba-test.sco-eng.sco.eng | 34.4 | 0.551 |
| Tatoeba-test.sco-nld.sco.nld | 9.9 | 0.438 |
| Tatoeba-test.stq-deu.stq.deu | 8.6 | 0.385 |
| Tatoeba-test.stq-eng.stq.eng | 21.8 | 0.431 |
| Tatoeba-test.stq-frr.stq.frr | 2.1 | 0.111 |
| Tatoeba-test.stq-fry.stq.fry | 7.6 | 0.267 |
| Tatoeba-test.stq-gos.stq.gos | 0.7 | 0.198 |
| Tatoeba-test.stq-isl.stq.isl | 16.0 | 0.121 |
| Tatoeba-test.stq-ltz.stq.ltz | 3.8 | 0.150 |
| Tatoeba-test.stq-nld.stq.nld | 14.6 | 0.375 |
| Tatoeba-test.stq-yid.stq.yid | 2.4 | 0.096 |
| Tatoeba-test.swe-afr.swe.afr | 51.8 | 0.802 |
| Tatoeba-test.swe-dan.swe.dan | 64.9 | 0.784 |
| Tatoeba-test.swe-deu.swe.deu | 47.0 | 0.657 |
| Tatoeba-test.swe-eng.swe.eng | 55.8 | 0.700 |
| Tatoeba-test.swe-fao.swe.fao | 0.0 | 0.060 |
| Tatoeba-test.swe-fry.swe.fry | 14.1 | 0.449 |
| Tatoeba-test.swe-gos.swe.gos | 7.5 | 0.291 |
| Tatoeba-test.swe-isl.swe.isl | 70.7 | 0.812 |
| Tatoeba-test.swe-ltz.swe.ltz | 15.9 | 0.553 |
| Tatoeba-test.swe-nld.swe.nld | 78.7 | 0.854 |
| Tatoeba-test.swe-nor.swe.nor | 67.1 | 0.799 |
| Tatoeba-test.swe-yid.swe.yid | 14.7 | 0.156 |
| Tatoeba-test.swg-dan.swg.dan | 7.7 | 0.341 |
| Tatoeba-test.swg-deu.swg.deu | 8.0 | 0.334 |
| Tatoeba-test.swg-eng.swg.eng | 12.4 | 0.305 |
| Tatoeba-test.swg-nds.swg.nds | 1.1 | 0.209 |
| Tatoeba-test.swg-nld.swg.nld | 4.9 | 0.244 |
| Tatoeba-test.swg-yid.swg.yid | 3.4 | 0.194 |
| Tatoeba-test.yid-afr.yid.afr | 23.6 | 0.552 |
| Tatoeba-test.yid-ang.yid.ang | 0.1 | 0.066 |
| Tatoeba-test.yid-dan.yid.dan | 17.5 | 0.392 |
| Tatoeba-test.yid-deu.yid.deu | 21.0 | 0.423 |
| Tatoeba-test.yid-eng.yid.eng | 17.4 | 0.368 |
| Tatoeba-test.yid-enm.yid.enm | 0.6 | 0.143 |
| Tatoeba-test.yid-fry.yid.fry | 5.3 | 0.169 |
| Tatoeba-test.yid-gos.yid.gos | 1.2 | 0.149 |
| Tatoeba-test.yid-ltz.yid.ltz | 3.5 | 0.256 |
| Tatoeba-test.yid-nds.yid.nds | 14.4 | 0.487 |
| Tatoeba-test.yid-nld.yid.nld | 26.1 | 0.423 |
| Tatoeba-test.yid-nor.yid.nor | 47.1 | 0.583 |
| Tatoeba-test.yid-stq.yid.stq | 1.5 | 0.092 |
| Tatoeba-test.yid-swe.yid.swe | 35.9 | 0.518 |
| Tatoeba-test.yid-swg.yid.swg | 1.0 | 0.124 |
### System Info:
- hf_name: gem-gem
- source_languages: gem
- target_languages: gem
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gem-gem/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']
- src_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'}
- tgt_constituents: {'ksh', 'enm_Latn', 'got_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob_Hebr', 'ang_Latn', 'frr', 'non_Latn', 'yid', 'nds'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gem-gem/opus-2020-07-27.test.txt
- src_alpha3: gem
- tgt_alpha3: gem
- short_pair: gem-gem
- chrF2_score: 0.614
- bleu: 42.7
- brevity_penalty: 0.993
- ref_len: 73459.0
- src_name: Germanic languages
- tgt_name: Germanic languages
- train_date: 2020-07-27
- src_alpha2: gem
- tgt_alpha2: gem
- prefer_old: False
- long_pair: gem-gem
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["da", "sv", "af", "nn", "fy", "fo", "de", "nb", "nl", "is", "en", "lb", "yi", "gem"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-gem-gem | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"da",
"sv",
"af",
"nn",
"fy",
"fo",
"de",
"nb",
"nl",
"is",
"en",
"lb",
"yi",
"gem",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"da",
"sv",
"af",
"nn",
"fy",
"fo",
"de",
"nb",
"nl",
"is",
"en",
"lb",
"yi",
"gem"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #da #sv #af #nn #fy #fo #de #nb #nl #is #en #lb #yi #gem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### gem-gem
* source group: Germanic languages
* target group: Germanic languages
* OPUS readme: gem-gem
* model: transformer
* source language(s): afr ang\_Latn dan deu eng enm\_Latn fao frr fry gos got\_Goth gsw isl ksh ltz nds nld nno nob nob\_Hebr non\_Latn pdc sco stq swe swg yid
* target language(s): afr ang\_Latn dan deu eng enm\_Latn fao frr fry gos got\_Goth gsw isl ksh ltz nds nld nno nob nob\_Hebr non\_Latn pdc sco stq swe swg yid
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.5, chr-F: 0.519
testset: URL, BLEU: 18.7, chr-F: 0.495
testset: URL, BLEU: 22.8, chr-F: 0.509
testset: URL, BLEU: 18.6, chr-F: 0.485
testset: URL, BLEU: 22.2, chr-F: 0.507
testset: URL, BLEU: 18.3, chr-F: 0.491
testset: URL, BLEU: 24.8, chr-F: 0.537
testset: URL, BLEU: 19.7, chr-F: 0.499
testset: URL, BLEU: 22.9, chr-F: 0.516
testset: URL, BLEU: 18.3, chr-F: 0.485
testset: URL, BLEU: 23.9, chr-F: 0.524
testset: URL, BLEU: 18.5, chr-F: 0.484
testset: URL, BLEU: 26.3, chr-F: 0.537
testset: URL, BLEU: 21.5, chr-F: 0.506
testset: URL, BLEU: 25.7, chr-F: 0.535
testset: URL, BLEU: 27.3, chr-F: 0.542
testset: URL, BLEU: 24.2, chr-F: 0.534
testset: URL, BLEU: 31.8, chr-F: 0.584
testset: URL, BLEU: 28.4, chr-F: 0.564
testset: URL, BLEU: 27.6, chr-F: 0.545
testset: URL, BLEU: 22.8, chr-F: 0.527
testset: URL, BLEU: 34.1, chr-F: 0.593
testset: URL, BLEU: 32.7, chr-F: 0.595
testset: URL, BLEU: 30.6, chr-F: 0.565
testset: URL, BLEU: 29.5, chr-F: 0.567
testset: URL, BLEU: 0.0, chr-F: 0.053
testset: URL, BLEU: 57.8, chr-F: 0.907
testset: URL, BLEU: 46.4, chr-F: 0.663
testset: URL, BLEU: 57.4, chr-F: 0.717
testset: URL, BLEU: 11.3, chr-F: 0.285
testset: URL, BLEU: 0.0, chr-F: 0.167
testset: URL, BLEU: 1.5, chr-F: 0.178
testset: URL, BLEU: 29.0, chr-F: 0.760
testset: URL, BLEU: 11.2, chr-F: 0.246
testset: URL, BLEU: 53.3, chr-F: 0.708
testset: URL, BLEU: 66.0, chr-F: 0.752
testset: URL, BLEU: 88.0, chr-F: 0.955
testset: URL, BLEU: 59.5, chr-F: 0.443
testset: URL, BLEU: 10.7, chr-F: 0.043
testset: URL, BLEU: 6.3, chr-F: 0.190
testset: URL, BLEU: 1.4, chr-F: 0.212
testset: URL, BLEU: 8.1, chr-F: 0.247
testset: URL, BLEU: 1.7, chr-F: 0.196
testset: URL, BLEU: 10.7, chr-F: 0.105
testset: URL, BLEU: 10.7, chr-F: 0.128
testset: URL, BLEU: 16.0, chr-F: 0.135
testset: URL, BLEU: 16.0, chr-F: 0.121
testset: URL, BLEU: 1.5, chr-F: 0.136
testset: URL, BLEU: 22.7, chr-F: 0.655
testset: URL, BLEU: 3.1, chr-F: 0.110
testset: URL, BLEU: 47.4, chr-F: 0.676
testset: URL, BLEU: 54.7, chr-F: 0.704
testset: URL, BLEU: 4.8, chr-F: 0.291
testset: URL, BLEU: 9.7, chr-F: 0.120
testset: URL, BLEU: 3.8, chr-F: 0.240
testset: URL, BLEU: 66.1, chr-F: 0.678
testset: URL, BLEU: 78.3, chr-F: 0.563
testset: URL, BLEU: 6.2, chr-F: 0.335
testset: URL, BLEU: 60.0, chr-F: 0.748
testset: URL, BLEU: 68.1, chr-F: 0.812
testset: URL, BLEU: 65.0, chr-F: 0.785
testset: URL, BLEU: 2.6, chr-F: 0.182
testset: URL, BLEU: 9.3, chr-F: 0.226
testset: URL, BLEU: 50.3, chr-F: 0.682
testset: URL, BLEU: 0.5, chr-F: 0.118
testset: URL, BLEU: 49.6, chr-F: 0.679
testset: URL, BLEU: 43.4, chr-F: 0.618
testset: URL, BLEU: 2.2, chr-F: 0.159
testset: URL, BLEU: 0.4, chr-F: 0.156
testset: URL, BLEU: 10.7, chr-F: 0.355
testset: URL, BLEU: 0.7, chr-F: 0.183
testset: URL, BLEU: 0.3, chr-F: 0.010
testset: URL, BLEU: 1.1, chr-F: 0.130
testset: URL, BLEU: 24.3, chr-F: 0.504
testset: URL, BLEU: 0.9, chr-F: 0.173
testset: URL, BLEU: 15.6, chr-F: 0.304
testset: URL, BLEU: 21.2, chr-F: 0.469
testset: URL, BLEU: 47.1, chr-F: 0.657
testset: URL, BLEU: 43.9, chr-F: 0.646
testset: URL, BLEU: 3.0, chr-F: 0.133
testset: URL, BLEU: 12.0, chr-F: 0.296
testset: URL, BLEU: 0.6, chr-F: 0.137
testset: URL, BLEU: 50.6, chr-F: 0.668
testset: URL, BLEU: 0.2, chr-F: 0.137
testset: URL, BLEU: 3.9, chr-F: 0.229
testset: URL, BLEU: 55.2, chr-F: 0.721
testset: URL, BLEU: 4.9, chr-F: 0.118
testset: URL, BLEU: 52.6, chr-F: 0.684
testset: URL, BLEU: 35.4, chr-F: 0.573
testset: URL, BLEU: 1.8, chr-F: 0.223
testset: URL, BLEU: 7.0, chr-F: 0.312
testset: URL, BLEU: 1.2, chr-F: 0.050
testset: URL, BLEU: 15.8, chr-F: 0.381
testset: URL, BLEU: 0.7, chr-F: 0.170
testset: URL, BLEU: 0.3, chr-F: 0.011
testset: URL, BLEU: 0.5, chr-F: 0.126
testset: URL, BLEU: 20.9, chr-F: 0.463
testset: URL, BLEU: 1.0, chr-F: 0.141
testset: URL, BLEU: 12.8, chr-F: 0.292
testset: URL, BLEU: 18.3, chr-F: 0.428
testset: URL, BLEU: 47.3, chr-F: 0.657
testset: URL, BLEU: 0.3, chr-F: 0.145
testset: URL, BLEU: 47.2, chr-F: 0.650
testset: URL, BLEU: 4.8, chr-F: 0.177
testset: URL, BLEU: 38.1, chr-F: 0.597
testset: URL, BLEU: 2.4, chr-F: 0.288
testset: URL, BLEU: 52.7, chr-F: 0.677
testset: URL, BLEU: 1.1, chr-F: 0.163
testset: URL, BLEU: 4.5, chr-F: 0.223
testset: URL, BLEU: 22.8, chr-F: 0.401
testset: URL, BLEU: 0.4, chr-F: 0.062
testset: URL, BLEU: 51.4, chr-F: 0.782
testset: URL, BLEU: 33.8, chr-F: 0.473
testset: URL, BLEU: 22.4, chr-F: 0.495
testset: URL, BLEU: 16.0, chr-F: 0.173
testset: URL, BLEU: 6.1, chr-F: 0.222
testset: URL, BLEU: 59.5, chr-F: 0.651
testset: URL, BLEU: 10.5, chr-F: 0.130
testset: URL, BLEU: 18.1, chr-F: 0.327
testset: URL, BLEU: 38.3, chr-F: 0.546
testset: URL, BLEU: 15.6, chr-F: 0.290
testset: URL, BLEU: 2.3, chr-F: 0.215
testset: URL, BLEU: 2.1, chr-F: 0.035
testset: URL, BLEU: 53.7, chr-F: 0.625
testset: URL, BLEU: 24.7, chr-F: 0.435
testset: URL, BLEU: 12.7, chr-F: 0.116
testset: URL, BLEU: 26.3, chr-F: 0.341
testset: URL, BLEU: 41.9, chr-F: 0.586
testset: URL, BLEU: 0.0, chr-F: 1.000
testset: URL, BLEU: 7.4, chr-F: 0.263
testset: URL, BLEU: 7.0, chr-F: 0.157
testset: URL, BLEU: 4.0, chr-F: 0.112
testset: URL, BLEU: 1.0, chr-F: 0.135
testset: URL, BLEU: 12.4, chr-F: 0.207
testset: URL, BLEU: 10.6, chr-F: 0.227
testset: URL, BLEU: 1.0, chr-F: 0.058
testset: URL, BLEU: 12.7, chr-F: 0.333
testset: URL, BLEU: 30.8, chr-F: 0.555
testset: URL, BLEU: 31.2, chr-F: 0.506
testset: URL, BLEU: 0.0, chr-F: 0.175
testset: URL, BLEU: 1.6, chr-F: 0.091
testset: URL, BLEU: 1.1, chr-F: 0.254
testset: URL, BLEU: 30.4, chr-F: 0.526
testset: URL, BLEU: 12.4, chr-F: 0.116
testset: URL, BLEU: 43.4, chr-F: 0.637
testset: URL, BLEU: 47.1, chr-F: 0.607
testset: URL, BLEU: 0.6, chr-F: 0.181
testset: URL, BLEU: 30.2, chr-F: 0.587
testset: URL, BLEU: 3.1, chr-F: 0.173
testset: URL, BLEU: 1.8, chr-F: 0.215
testset: URL, BLEU: 0.0, chr-F: 0.045
testset: URL, BLEU: 4.1, chr-F: 0.236
testset: URL, BLEU: 19.6, chr-F: 0.406
testset: URL, BLEU: 15.1, chr-F: 0.329
testset: URL, BLEU: 5.8, chr-F: 0.271
testset: URL, BLEU: 19.0, chr-F: 0.136
testset: URL, BLEU: 1.3, chr-F: 0.119
testset: URL, BLEU: 17.1, chr-F: 0.388
testset: URL, BLEU: 16.8, chr-F: 0.356
testset: URL, BLEU: 3.6, chr-F: 0.174
testset: URL, BLEU: 4.7, chr-F: 0.225
testset: URL, BLEU: 16.3, chr-F: 0.406
testset: URL, BLEU: 0.7, chr-F: 0.154
testset: URL, BLEU: 8.6, chr-F: 0.319
testset: URL, BLEU: 4.4, chr-F: 0.165
testset: URL, BLEU: 0.2, chr-F: 0.041
testset: URL, BLEU: 0.2, chr-F: 0.068
testset: URL, BLEU: 0.6, chr-F: 0.000
testset: URL, BLEU: 15.9, chr-F: 0.373
testset: URL, BLEU: 14.7, chr-F: 0.320
testset: URL, BLEU: 38.0, chr-F: 0.641
testset: URL, BLEU: 0.0, chr-F: 0.037
testset: URL, BLEU: 67.7, chr-F: 0.836
testset: URL, BLEU: 42.6, chr-F: 0.614
testset: URL, BLEU: 43.5, chr-F: 0.610
testset: URL, BLEU: 12.4, chr-F: 0.123
testset: URL, BLEU: 15.6, chr-F: 0.176
testset: URL, BLEU: 7.1, chr-F: 0.257
testset: URL, BLEU: 53.5, chr-F: 0.690
testset: URL, BLEU: 10.7, chr-F: 0.176
testset: URL, BLEU: 67.7, chr-F: 0.818
testset: URL, BLEU: 11.8, chr-F: 0.393
testset: URL, BLEU: 4.0, chr-F: 0.239
testset: URL, BLEU: 9.5, chr-F: 0.085
testset: URL, BLEU: 36.5, chr-F: 0.529
testset: URL, BLEU: 0.0, chr-F: 0.043
testset: URL, BLEU: 80.6, chr-F: 0.722
testset: URL, BLEU: 40.1, chr-F: 0.581
testset: URL, BLEU: 36.1, chr-F: 0.511
testset: URL, BLEU: 16.5, chr-F: 0.524
testset: URL, BLEU: 0.7, chr-F: 0.118
testset: URL, BLEU: 40.4, chr-F: 0.535
testset: URL, BLEU: 19.1, chr-F: 0.582
testset: URL, BLEU: 2.4, chr-F: 0.093
testset: URL, BLEU: 25.9, chr-F: 0.430
testset: URL, BLEU: 1.5, chr-F: 0.160
testset: URL, BLEU: 42.7, chr-F: 0.614
testset: URL, BLEU: 23.0, chr-F: 0.465
testset: URL, BLEU: 39.8, chr-F: 0.610
testset: URL, BLEU: 32.0, chr-F: 0.520
testset: URL, BLEU: 3.9, chr-F: 0.156
testset: URL, BLEU: 10.7, chr-F: 0.127
testset: URL, BLEU: 10.7, chr-F: 0.231
testset: URL, BLEU: 0.8, chr-F: 0.157
testset: URL, BLEU: 44.1, chr-F: 0.634
testset: URL, BLEU: 47.1, chr-F: 0.665
testset: URL, BLEU: 0.5, chr-F: 0.166
testset: URL, BLEU: 12.7, chr-F: 0.337
testset: URL, BLEU: 58.4, chr-F: 0.748
testset: URL, BLEU: 61.3, chr-F: 0.753
testset: URL, BLEU: 48.2, chr-F: 0.670
testset: URL, BLEU: 52.8, chr-F: 0.690
testset: URL, BLEU: 5.7, chr-F: 0.178
testset: URL, BLEU: 0.9, chr-F: 0.159
testset: URL, BLEU: 23.0, chr-F: 0.467
testset: URL, BLEU: 1.0, chr-F: 0.165
testset: URL, BLEU: 14.4, chr-F: 0.310
testset: URL, BLEU: 24.1, chr-F: 0.485
testset: URL, BLEU: 53.6, chr-F: 0.705
testset: URL, BLEU: 15.0, chr-F: 0.415
testset: URL, BLEU: 0.5, chr-F: 0.183
testset: URL, BLEU: 73.6, chr-F: 0.842
testset: URL, BLEU: 4.2, chr-F: 0.191
testset: URL, BLEU: 9.4, chr-F: 0.299
testset: URL, BLEU: 27.7, chr-F: 0.501
testset: URL, BLEU: 48.2, chr-F: 0.687
testset: URL, BLEU: 69.5, chr-F: 0.820
testset: URL, BLEU: 41.1, chr-F: 0.634
testset: URL, BLEU: 49.4, chr-F: 0.660
testset: URL, BLEU: 6.8, chr-F: 0.230
testset: URL, BLEU: 6.9, chr-F: 0.395
testset: URL, BLEU: 9.2, chr-F: 0.323
testset: URL, BLEU: 1.5, chr-F: 0.000
testset: URL, BLEU: 34.5, chr-F: 0.555
testset: URL, BLEU: 22.1, chr-F: 0.447
testset: URL, BLEU: 34.3, chr-F: 0.565
testset: URL, BLEU: 50.5, chr-F: 0.676
testset: URL, BLEU: 57.6, chr-F: 0.764
testset: URL, BLEU: 68.9, chr-F: 0.813
testset: URL, BLEU: 65.0, chr-F: 0.627
testset: URL, BLEU: 43.5, chr-F: 0.559
testset: URL, BLEU: 26.1, chr-F: 0.471
testset: URL, BLEU: 7.1, chr-F: 0.295
testset: URL, BLEU: 34.4, chr-F: 0.551
testset: URL, BLEU: 9.9, chr-F: 0.438
testset: URL, BLEU: 8.6, chr-F: 0.385
testset: URL, BLEU: 21.8, chr-F: 0.431
testset: URL, BLEU: 2.1, chr-F: 0.111
testset: URL, BLEU: 7.6, chr-F: 0.267
testset: URL, BLEU: 0.7, chr-F: 0.198
testset: URL, BLEU: 16.0, chr-F: 0.121
testset: URL, BLEU: 3.8, chr-F: 0.150
testset: URL, BLEU: 14.6, chr-F: 0.375
testset: URL, BLEU: 2.4, chr-F: 0.096
testset: URL, BLEU: 51.8, chr-F: 0.802
testset: URL, BLEU: 64.9, chr-F: 0.784
testset: URL, BLEU: 47.0, chr-F: 0.657
testset: URL, BLEU: 55.8, chr-F: 0.700
testset: URL, BLEU: 0.0, chr-F: 0.060
testset: URL, BLEU: 14.1, chr-F: 0.449
testset: URL, BLEU: 7.5, chr-F: 0.291
testset: URL, BLEU: 70.7, chr-F: 0.812
testset: URL, BLEU: 15.9, chr-F: 0.553
testset: URL, BLEU: 78.7, chr-F: 0.854
testset: URL, BLEU: 67.1, chr-F: 0.799
testset: URL, BLEU: 14.7, chr-F: 0.156
testset: URL, BLEU: 7.7, chr-F: 0.341
testset: URL, BLEU: 8.0, chr-F: 0.334
testset: URL, BLEU: 12.4, chr-F: 0.305
testset: URL, BLEU: 1.1, chr-F: 0.209
testset: URL, BLEU: 4.9, chr-F: 0.244
testset: URL, BLEU: 3.4, chr-F: 0.194
testset: URL, BLEU: 23.6, chr-F: 0.552
testset: URL, BLEU: 0.1, chr-F: 0.066
testset: URL, BLEU: 17.5, chr-F: 0.392
testset: URL, BLEU: 21.0, chr-F: 0.423
testset: URL, BLEU: 17.4, chr-F: 0.368
testset: URL, BLEU: 0.6, chr-F: 0.143
testset: URL, BLEU: 5.3, chr-F: 0.169
testset: URL, BLEU: 1.2, chr-F: 0.149
testset: URL, BLEU: 3.5, chr-F: 0.256
testset: URL, BLEU: 14.4, chr-F: 0.487
testset: URL, BLEU: 26.1, chr-F: 0.423
testset: URL, BLEU: 47.1, chr-F: 0.583
testset: URL, BLEU: 1.5, chr-F: 0.092
testset: URL, BLEU: 35.9, chr-F: 0.518
testset: URL, BLEU: 1.0, chr-F: 0.124
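The rows above report BLEU alongside chr-F, a character n-gram F-score (the System Info below records it as chrF2, i.e. beta = 2). The official scores are computed with the Marian/sacreBLEU tooling; as an illustration only, here is a simplified pure-Python sketch of the chr-F idea (it omits sacreBLEU's exact whitespace and smoothing details):

```python
from collections import Counter

def char_ngrams(text, n):
    """Character n-grams of a string, with runs of whitespace collapsed."""
    text = " ".join(text.split())
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified chrF: average char n-gram precision/recall, F-beta combined."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if sum(hyp.values()) == 0 or sum(ref.values()) == 0:
            continue  # string shorter than n: skip this order
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p == 0.0 and r == 0.0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

print(round(chrf("ein Haus", "ein Haus"), 3))  # identical strings score 1.0
```

Note the scale: this sketch returns a value in [0, 1], matching the chr-F column above, whereas sacreBLEU reports chrF on a 0-100 scale.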
### System Info:
* hf\_name: gem-gem
* source\_languages: gem
* target\_languages: gem
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']
* src\_constituents: {'ksh', 'enm\_Latn', 'got\_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob\_Hebr', 'ang\_Latn', 'frr', 'non\_Latn', 'yid', 'nds'}
* tgt\_constituents: {'ksh', 'enm\_Latn', 'got\_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob\_Hebr', 'ang\_Latn', 'frr', 'non\_Latn', 'yid', 'nds'}
* src\_multilingual: True
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: gem
* tgt\_alpha3: gem
* short\_pair: gem-gem
* chrF2\_score: 0.614
* bleu: 42.7
* brevity\_penalty: 0.993
* ref\_len: 73459.0
* src\_name: Germanic languages
* tgt\_name: Germanic languages
* train\_date: 2020-07-27
* src\_alpha2: gem
* tgt\_alpha2: gem
* prefer\_old: False
* long\_pair: gem-gem
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
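Because both the source and target groups are multilingual (src_multilingual and tgt_multilingual are True), every input sentence must carry the sentence-initial '>>id<<' token selecting the target language. A minimal sketch of that input preparation (the helper name is illustrative, not part of the released model):

```python
def with_target_token(sentence, tgt_lang):
    """Prepend the '>>id<<' token a multilingual OPUS-MT model expects.

    tgt_lang must be a valid target constituent ID listed above,
    e.g. 'deu', 'nld', 'swe', 'isl'.
    """
    return f">>{tgt_lang}<< {sentence}"

batch = [with_target_token("This is a test.", lang) for lang in ("deu", "nld", "swe")]
for line in batch:
    print(line)  # e.g. ">>deu<< This is a test."
```

Strings formatted this way are what you would feed to the Marian tokenizer before generation, e.g. `tokenizer(batch, return_tensors="pt", padding=True)` followed by `model.generate(...)` with the corresponding `MarianMTModel` checkpoint (presumably `Helsinki-NLP/opus-mt-gem-gem`, inferred from the hf_name above).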
| [
"### gem-gem\n\n\n* source group: Germanic languages\n* target group: Germanic languages\n* OPUS readme: gem-gem\n* model: transformer\n* source language(s): afr ang\\_Latn dan deu eng enm\\_Latn fao frr fry gos got\\_Goth gsw isl ksh ltz nds nld nno nob nob\\_Hebr non\\_Latn pdc sco stq swe swg yid\n* target language(s): afr ang\\_Latn dan deu eng enm\\_Latn fao frr fry gos got\\_Goth gsw isl ksh ltz nds nld nno nob nob\\_Hebr non\\_Latn pdc sco stq swe swg yid\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.5, chr-F: 0.519\ntestset: URL, BLEU: 18.7, chr-F: 0.495\ntestset: URL, BLEU: 22.8, chr-F: 0.509\ntestset: URL, BLEU: 18.6, chr-F: 0.485\ntestset: URL, BLEU: 22.2, chr-F: 0.507\ntestset: URL, BLEU: 18.3, chr-F: 0.491\ntestset: URL, BLEU: 24.8, chr-F: 0.537\ntestset: URL, BLEU: 19.7, chr-F: 0.499\ntestset: URL, BLEU: 22.9, chr-F: 0.516\ntestset: URL, BLEU: 18.3, chr-F: 0.485\ntestset: URL, BLEU: 23.9, chr-F: 0.524\ntestset: URL, BLEU: 18.5, chr-F: 0.484\ntestset: URL, BLEU: 26.3, chr-F: 0.537\ntestset: URL, BLEU: 21.5, chr-F: 0.506\ntestset: URL, BLEU: 25.7, chr-F: 0.535\ntestset: URL, BLEU: 27.3, chr-F: 0.542\ntestset: URL, BLEU: 24.2, chr-F: 0.534\ntestset: URL, BLEU: 31.8, chr-F: 0.584\ntestset: URL, BLEU: 28.4, chr-F: 0.564\ntestset: URL, BLEU: 27.6, chr-F: 0.545\ntestset: URL, BLEU: 22.8, chr-F: 0.527\ntestset: URL, BLEU: 34.1, chr-F: 0.593\ntestset: URL, BLEU: 32.7, chr-F: 0.595\ntestset: URL, BLEU: 30.6, chr-F: 0.565\ntestset: URL, BLEU: 29.5, chr-F: 0.567\ntestset: URL, BLEU: 0.0, chr-F: 0.053\ntestset: URL, BLEU: 57.8, chr-F: 0.907\ntestset: URL, BLEU: 46.4, chr-F: 0.663\ntestset: URL, BLEU: 57.4, chr-F: 0.717\ntestset: URL, BLEU: 11.3, chr-F: 0.285\ntestset: URL, BLEU: 
0.0, chr-F: 0.167\ntestset: URL, BLEU: 1.5, chr-F: 0.178\ntestset: URL, BLEU: 29.0, chr-F: 0.760\ntestset: URL, BLEU: 11.2, chr-F: 0.246\ntestset: URL, BLEU: 53.3, chr-F: 0.708\ntestset: URL, BLEU: 66.0, chr-F: 0.752\ntestset: URL, BLEU: 88.0, chr-F: 0.955\ntestset: URL, BLEU: 59.5, chr-F: 0.443\ntestset: URL, BLEU: 10.7, chr-F: 0.043\ntestset: URL, BLEU: 6.3, chr-F: 0.190\ntestset: URL, BLEU: 1.4, chr-F: 0.212\ntestset: URL, BLEU: 8.1, chr-F: 0.247\ntestset: URL, BLEU: 1.7, chr-F: 0.196\ntestset: URL, BLEU: 10.7, chr-F: 0.105\ntestset: URL, BLEU: 10.7, chr-F: 0.128\ntestset: URL, BLEU: 16.0, chr-F: 0.135\ntestset: URL, BLEU: 16.0, chr-F: 0.121\ntestset: URL, BLEU: 1.5, chr-F: 0.136\ntestset: URL, BLEU: 22.7, chr-F: 0.655\ntestset: URL, BLEU: 3.1, chr-F: 0.110\ntestset: URL, BLEU: 47.4, chr-F: 0.676\ntestset: URL, BLEU: 54.7, chr-F: 0.704\ntestset: URL, BLEU: 4.8, chr-F: 0.291\ntestset: URL, BLEU: 9.7, chr-F: 0.120\ntestset: URL, BLEU: 3.8, chr-F: 0.240\ntestset: URL, BLEU: 66.1, chr-F: 0.678\ntestset: URL, BLEU: 78.3, chr-F: 0.563\ntestset: URL, BLEU: 6.2, chr-F: 0.335\ntestset: URL, BLEU: 60.0, chr-F: 0.748\ntestset: URL, BLEU: 68.1, chr-F: 0.812\ntestset: URL, BLEU: 65.0, chr-F: 0.785\ntestset: URL, BLEU: 2.6, chr-F: 0.182\ntestset: URL, BLEU: 9.3, chr-F: 0.226\ntestset: URL, BLEU: 50.3, chr-F: 0.682\ntestset: URL, BLEU: 0.5, chr-F: 0.118\ntestset: URL, BLEU: 49.6, chr-F: 0.679\ntestset: URL, BLEU: 43.4, chr-F: 0.618\ntestset: URL, BLEU: 2.2, chr-F: 0.159\ntestset: URL, BLEU: 0.4, chr-F: 0.156\ntestset: URL, BLEU: 10.7, chr-F: 0.355\ntestset: URL, BLEU: 0.7, chr-F: 0.183\ntestset: URL, BLEU: 0.3, chr-F: 0.010\ntestset: URL, BLEU: 1.1, chr-F: 0.130\ntestset: URL, BLEU: 24.3, chr-F: 0.504\ntestset: URL, BLEU: 0.9, chr-F: 0.173\ntestset: URL, BLEU: 15.6, chr-F: 0.304\ntestset: URL, BLEU: 21.2, chr-F: 0.469\ntestset: URL, BLEU: 47.1, chr-F: 0.657\ntestset: URL, BLEU: 43.9, chr-F: 0.646\ntestset: URL, BLEU: 3.0, chr-F: 0.133\ntestset: URL, BLEU: 12.0, chr-F: 
0.296\ntestset: URL, BLEU: 0.6, chr-F: 0.137\ntestset: URL, BLEU: 50.6, chr-F: 0.668\ntestset: URL, BLEU: 0.2, chr-F: 0.137\ntestset: URL, BLEU: 3.9, chr-F: 0.229\ntestset: URL, BLEU: 55.2, chr-F: 0.721\ntestset: URL, BLEU: 4.9, chr-F: 0.118\ntestset: URL, BLEU: 52.6, chr-F: 0.684\ntestset: URL, BLEU: 35.4, chr-F: 0.573\ntestset: URL, BLEU: 1.8, chr-F: 0.223\ntestset: URL, BLEU: 7.0, chr-F: 0.312\ntestset: URL, BLEU: 1.2, chr-F: 0.050\ntestset: URL, BLEU: 15.8, chr-F: 0.381\ntestset: URL, BLEU: 0.7, chr-F: 0.170\ntestset: URL, BLEU: 0.3, chr-F: 0.011\ntestset: URL, BLEU: 0.5, chr-F: 0.126\ntestset: URL, BLEU: 20.9, chr-F: 0.463\ntestset: URL, BLEU: 1.0, chr-F: 0.141\ntestset: URL, BLEU: 12.8, chr-F: 0.292\ntestset: URL, BLEU: 18.3, chr-F: 0.428\ntestset: URL, BLEU: 47.3, chr-F: 0.657\ntestset: URL, BLEU: 0.3, chr-F: 0.145\ntestset: URL, BLEU: 47.2, chr-F: 0.650\ntestset: URL, BLEU: 4.8, chr-F: 0.177\ntestset: URL, BLEU: 38.1, chr-F: 0.597\ntestset: URL, BLEU: 2.4, chr-F: 0.288\ntestset: URL, BLEU: 52.7, chr-F: 0.677\ntestset: URL, BLEU: 1.1, chr-F: 0.163\ntestset: URL, BLEU: 4.5, chr-F: 0.223\ntestset: URL, BLEU: 22.8, chr-F: 0.401\ntestset: URL, BLEU: 0.4, chr-F: 0.062\ntestset: URL, BLEU: 51.4, chr-F: 0.782\ntestset: URL, BLEU: 33.8, chr-F: 0.473\ntestset: URL, BLEU: 22.4, chr-F: 0.495\ntestset: URL, BLEU: 16.0, chr-F: 0.173\ntestset: URL, BLEU: 6.1, chr-F: 0.222\ntestset: URL, BLEU: 59.5, chr-F: 0.651\ntestset: URL, BLEU: 10.5, chr-F: 0.130\ntestset: URL, BLEU: 18.1, chr-F: 0.327\ntestset: URL, BLEU: 38.3, chr-F: 0.546\ntestset: URL, BLEU: 15.6, chr-F: 0.290\ntestset: URL, BLEU: 2.3, chr-F: 0.215\ntestset: URL, BLEU: 2.1, chr-F: 0.035\ntestset: URL, BLEU: 53.7, chr-F: 0.625\ntestset: URL, BLEU: 24.7, chr-F: 0.435\ntestset: URL, BLEU: 12.7, chr-F: 0.116\ntestset: URL, BLEU: 26.3, chr-F: 0.341\ntestset: URL, BLEU: 41.9, chr-F: 0.586\ntestset: URL, BLEU: 0.0, chr-F: 1.000\ntestset: URL, BLEU: 7.4, chr-F: 0.263\ntestset: URL, BLEU: 7.0, chr-F: 0.157\ntestset: URL, 
BLEU: 4.0, chr-F: 0.112\ntestset: URL, BLEU: 1.0, chr-F: 0.135\ntestset: URL, BLEU: 12.4, chr-F: 0.207\ntestset: URL, BLEU: 10.6, chr-F: 0.227\ntestset: URL, BLEU: 1.0, chr-F: 0.058\ntestset: URL, BLEU: 12.7, chr-F: 0.333\ntestset: URL, BLEU: 30.8, chr-F: 0.555\ntestset: URL, BLEU: 31.2, chr-F: 0.506\ntestset: URL, BLEU: 0.0, chr-F: 0.175\ntestset: URL, BLEU: 1.6, chr-F: 0.091\ntestset: URL, BLEU: 1.1, chr-F: 0.254\ntestset: URL, BLEU: 30.4, chr-F: 0.526\ntestset: URL, BLEU: 12.4, chr-F: 0.116\ntestset: URL, BLEU: 43.4, chr-F: 0.637\ntestset: URL, BLEU: 47.1, chr-F: 0.607\ntestset: URL, BLEU: 0.6, chr-F: 0.181\ntestset: URL, BLEU: 30.2, chr-F: 0.587\ntestset: URL, BLEU: 3.1, chr-F: 0.173\ntestset: URL, BLEU: 1.8, chr-F: 0.215\ntestset: URL, BLEU: 0.0, chr-F: 0.045\ntestset: URL, BLEU: 4.1, chr-F: 0.236\ntestset: URL, BLEU: 19.6, chr-F: 0.406\ntestset: URL, BLEU: 15.1, chr-F: 0.329\ntestset: URL, BLEU: 5.8, chr-F: 0.271\ntestset: URL, BLEU: 19.0, chr-F: 0.136\ntestset: URL, BLEU: 1.3, chr-F: 0.119\ntestset: URL, BLEU: 17.1, chr-F: 0.388\ntestset: URL, BLEU: 16.8, chr-F: 0.356\ntestset: URL, BLEU: 3.6, chr-F: 0.174\ntestset: URL, BLEU: 4.7, chr-F: 0.225\ntestset: URL, BLEU: 16.3, chr-F: 0.406\ntestset: URL, BLEU: 0.7, chr-F: 0.154\ntestset: URL, BLEU: 8.6, chr-F: 0.319\ntestset: URL, BLEU: 4.4, chr-F: 0.165\ntestset: URL, BLEU: 0.2, chr-F: 0.041\ntestset: URL, BLEU: 0.2, chr-F: 0.068\ntestset: URL, BLEU: 0.6, chr-F: 0.000\ntestset: URL, BLEU: 15.9, chr-F: 0.373\ntestset: URL, BLEU: 14.7, chr-F: 0.320\ntestset: URL, BLEU: 38.0, chr-F: 0.641\ntestset: URL, BLEU: 0.0, chr-F: 0.037\ntestset: URL, BLEU: 67.7, chr-F: 0.836\ntestset: URL, BLEU: 42.6, chr-F: 0.614\ntestset: URL, BLEU: 43.5, chr-F: 0.610\ntestset: URL, BLEU: 12.4, chr-F: 0.123\ntestset: URL, BLEU: 15.6, chr-F: 0.176\ntestset: URL, BLEU: 7.1, chr-F: 0.257\ntestset: URL, BLEU: 53.5, chr-F: 0.690\ntestset: URL, BLEU: 10.7, chr-F: 0.176\ntestset: URL, BLEU: 67.7, chr-F: 0.818\ntestset: URL, BLEU: 11.8, chr-F: 
0.393\ntestset: URL, BLEU: 4.0, chr-F: 0.239\ntestset: URL, BLEU: 9.5, chr-F: 0.085\ntestset: URL, BLEU: 36.5, chr-F: 0.529\ntestset: URL, BLEU: 0.0, chr-F: 0.043\ntestset: URL, BLEU: 80.6, chr-F: 0.722\ntestset: URL, BLEU: 40.1, chr-F: 0.581\ntestset: URL, BLEU: 36.1, chr-F: 0.511\ntestset: URL, BLEU: 16.5, chr-F: 0.524\ntestset: URL, BLEU: 0.7, chr-F: 0.118\ntestset: URL, BLEU: 40.4, chr-F: 0.535\ntestset: URL, BLEU: 19.1, chr-F: 0.582\ntestset: URL, BLEU: 2.4, chr-F: 0.093\ntestset: URL, BLEU: 25.9, chr-F: 0.430\ntestset: URL, BLEU: 1.5, chr-F: 0.160\ntestset: URL, BLEU: 42.7, chr-F: 0.614\ntestset: URL, BLEU: 23.0, chr-F: 0.465\ntestset: URL, BLEU: 39.8, chr-F: 0.610\ntestset: URL, BLEU: 32.0, chr-F: 0.520\ntestset: URL, BLEU: 3.9, chr-F: 0.156\ntestset: URL, BLEU: 10.7, chr-F: 0.127\ntestset: URL, BLEU: 10.7, chr-F: 0.231\ntestset: URL, BLEU: 0.8, chr-F: 0.157\ntestset: URL, BLEU: 44.1, chr-F: 0.634\ntestset: URL, BLEU: 47.1, chr-F: 0.665\ntestset: URL, BLEU: 0.5, chr-F: 0.166\ntestset: URL, BLEU: 12.7, chr-F: 0.337\ntestset: URL, BLEU: 58.4, chr-F: 0.748\ntestset: URL, BLEU: 61.3, chr-F: 0.753\ntestset: URL, BLEU: 48.2, chr-F: 0.670\ntestset: URL, BLEU: 52.8, chr-F: 0.690\ntestset: URL, BLEU: 5.7, chr-F: 0.178\ntestset: URL, BLEU: 0.9, chr-F: 0.159\ntestset: URL, BLEU: 23.0, chr-F: 0.467\ntestset: URL, BLEU: 1.0, chr-F: 0.165\ntestset: URL, BLEU: 14.4, chr-F: 0.310\ntestset: URL, BLEU: 24.1, chr-F: 0.485\ntestset: URL, BLEU: 53.6, chr-F: 0.705\ntestset: URL, BLEU: 15.0, chr-F: 0.415\ntestset: URL, BLEU: 0.5, chr-F: 0.183\ntestset: URL, BLEU: 73.6, chr-F: 0.842\ntestset: URL, BLEU: 4.2, chr-F: 0.191\ntestset: URL, BLEU: 9.4, chr-F: 0.299\ntestset: URL, BLEU: 27.7, chr-F: 0.501\ntestset: URL, BLEU: 48.2, chr-F: 0.687\ntestset: URL, BLEU: 69.5, chr-F: 0.820\ntestset: URL, BLEU: 41.1, chr-F: 0.634\ntestset: URL, BLEU: 49.4, chr-F: 0.660\ntestset: URL, BLEU: 6.8, chr-F: 0.230\ntestset: URL, BLEU: 6.9, chr-F: 0.395\ntestset: URL, BLEU: 9.2, chr-F: 0.323\ntestset: 
URL, BLEU: 1.5, chr-F: 0.000\ntestset: URL, BLEU: 34.5, chr-F: 0.555\ntestset: URL, BLEU: 22.1, chr-F: 0.447\ntestset: URL, BLEU: 34.3, chr-F: 0.565\ntestset: URL, BLEU: 50.5, chr-F: 0.676\ntestset: URL, BLEU: 57.6, chr-F: 0.764\ntestset: URL, BLEU: 68.9, chr-F: 0.813\ntestset: URL, BLEU: 65.0, chr-F: 0.627\ntestset: URL, BLEU: 43.5, chr-F: 0.559\ntestset: URL, BLEU: 26.1, chr-F: 0.471\ntestset: URL, BLEU: 7.1, chr-F: 0.295\ntestset: URL, BLEU: 34.4, chr-F: 0.551\ntestset: URL, BLEU: 9.9, chr-F: 0.438\ntestset: URL, BLEU: 8.6, chr-F: 0.385\ntestset: URL, BLEU: 21.8, chr-F: 0.431\ntestset: URL, BLEU: 2.1, chr-F: 0.111\ntestset: URL, BLEU: 7.6, chr-F: 0.267\ntestset: URL, BLEU: 0.7, chr-F: 0.198\ntestset: URL, BLEU: 16.0, chr-F: 0.121\ntestset: URL, BLEU: 3.8, chr-F: 0.150\ntestset: URL, BLEU: 14.6, chr-F: 0.375\ntestset: URL, BLEU: 2.4, chr-F: 0.096\ntestset: URL, BLEU: 51.8, chr-F: 0.802\ntestset: URL, BLEU: 64.9, chr-F: 0.784\ntestset: URL, BLEU: 47.0, chr-F: 0.657\ntestset: URL, BLEU: 55.8, chr-F: 0.700\ntestset: URL, BLEU: 0.0, chr-F: 0.060\ntestset: URL, BLEU: 14.1, chr-F: 0.449\ntestset: URL, BLEU: 7.5, chr-F: 0.291\ntestset: URL, BLEU: 70.7, chr-F: 0.812\ntestset: URL, BLEU: 15.9, chr-F: 0.553\ntestset: URL, BLEU: 78.7, chr-F: 0.854\ntestset: URL, BLEU: 67.1, chr-F: 0.799\ntestset: URL, BLEU: 14.7, chr-F: 0.156\ntestset: URL, BLEU: 7.7, chr-F: 0.341\ntestset: URL, BLEU: 8.0, chr-F: 0.334\ntestset: URL, BLEU: 12.4, chr-F: 0.305\ntestset: URL, BLEU: 1.1, chr-F: 0.209\ntestset: URL, BLEU: 4.9, chr-F: 0.244\ntestset: URL, BLEU: 3.4, chr-F: 0.194\ntestset: URL, BLEU: 23.6, chr-F: 0.552\ntestset: URL, BLEU: 0.1, chr-F: 0.066\ntestset: URL, BLEU: 17.5, chr-F: 0.392\ntestset: URL, BLEU: 21.0, chr-F: 0.423\ntestset: URL, BLEU: 17.4, chr-F: 0.368\ntestset: URL, BLEU: 0.6, chr-F: 0.143\ntestset: URL, BLEU: 5.3, chr-F: 0.169\ntestset: URL, BLEU: 1.2, chr-F: 0.149\ntestset: URL, BLEU: 3.5, chr-F: 0.256\ntestset: URL, BLEU: 14.4, chr-F: 0.487\ntestset: URL, BLEU: 26.1, 
chr-F: 0.423\ntestset: URL, BLEU: 47.1, chr-F: 0.583\ntestset: URL, BLEU: 1.5, chr-F: 0.092\ntestset: URL, BLEU: 35.9, chr-F: 0.518\ntestset: URL, BLEU: 1.0, chr-F: 0.124",
"### System Info:\n\n\n* hf\\_name: gem-gem\n* source\\_languages: gem\n* target\\_languages: gem\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']\n* src\\_constituents: {'ksh', 'enm\\_Latn', 'got\\_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob\\_Hebr', 'ang\\_Latn', 'frr', 'non\\_Latn', 'yid', 'nds'}\n* tgt\\_constituents: {'ksh', 'enm\\_Latn', 'got\\_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob\\_Hebr', 'ang\\_Latn', 'frr', 'non\\_Latn', 'yid', 'nds'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: gem\n* tgt\\_alpha3: gem\n* short\\_pair: gem-gem\n* chrF2\\_score: 0.614\n* bleu: 42.7\n* brevity\\_penalty: 0.993\n* ref\\_len: 73459.0\n* src\\_name: Germanic languages\n* tgt\\_name: Germanic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: gem\n* tgt\\_alpha2: gem\n* prefer\\_old: False\n* long\\_pair: gem-gem\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #da #sv #af #nn #fy #fo #de #nb #nl #is #en #lb #yi #gem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### gem-gem\n\n\n* source group: Germanic languages\n* target group: Germanic languages\n* OPUS readme: gem-gem\n* model: transformer\n* source language(s): afr ang\\_Latn dan deu eng enm\\_Latn fao frr fry gos got\\_Goth gsw isl ksh ltz nds nld nno nob nob\\_Hebr non\\_Latn pdc sco stq swe swg yid\n* target language(s): afr ang\\_Latn dan deu eng enm\\_Latn fao frr fry gos got\\_Goth gsw isl ksh ltz nds nld nno nob nob\\_Hebr non\\_Latn pdc sco stq swe swg yid\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.5, chr-F: 0.519\ntestset: URL, BLEU: 18.7, chr-F: 0.495\ntestset: URL, BLEU: 22.8, chr-F: 0.509\ntestset: URL, BLEU: 18.6, chr-F: 0.485\ntestset: URL, BLEU: 22.2, chr-F: 0.507\ntestset: URL, BLEU: 18.3, chr-F: 0.491\ntestset: URL, BLEU: 24.8, chr-F: 0.537\ntestset: URL, BLEU: 19.7, chr-F: 0.499\ntestset: URL, BLEU: 22.9, chr-F: 0.516\ntestset: URL, BLEU: 18.3, chr-F: 0.485\ntestset: URL, BLEU: 23.9, chr-F: 0.524\ntestset: URL, BLEU: 18.5, chr-F: 0.484\ntestset: URL, BLEU: 26.3, chr-F: 0.537\ntestset: URL, BLEU: 21.5, chr-F: 0.506\ntestset: URL, BLEU: 25.7, chr-F: 0.535\ntestset: URL, BLEU: 27.3, chr-F: 0.542\ntestset: URL, BLEU: 24.2, chr-F: 0.534\ntestset: URL, BLEU: 31.8, chr-F: 0.584\ntestset: URL, BLEU: 28.4, chr-F: 0.564\ntestset: URL, BLEU: 27.6, chr-F: 0.545\ntestset: URL, BLEU: 22.8, chr-F: 0.527\ntestset: URL, BLEU: 34.1, chr-F: 0.593\ntestset: URL, BLEU: 32.7, chr-F: 0.595\ntestset: URL, BLEU: 30.6, chr-F: 0.565\ntestset: URL, BLEU: 29.5, chr-F: 0.567\ntestset: URL, BLEU: 0.0, chr-F: 0.053\ntestset: URL, BLEU: 57.8, chr-F: 0.907\ntestset: URL, BLEU: 46.4, chr-F: 0.663\ntestset: URL, BLEU: 57.4, chr-F: 0.717\ntestset: URL, BLEU: 11.3, chr-F: 0.285\ntestset: URL, BLEU: 
0.0, chr-F: 0.167\ntestset: URL, BLEU: 1.5, chr-F: 0.178\ntestset: URL, BLEU: 29.0, chr-F: 0.760\ntestset: URL, BLEU: 11.2, chr-F: 0.246\ntestset: URL, BLEU: 53.3, chr-F: 0.708\ntestset: URL, BLEU: 66.0, chr-F: 0.752\ntestset: URL, BLEU: 88.0, chr-F: 0.955\ntestset: URL, BLEU: 59.5, chr-F: 0.443\ntestset: URL, BLEU: 10.7, chr-F: 0.043\ntestset: URL, BLEU: 6.3, chr-F: 0.190\ntestset: URL, BLEU: 1.4, chr-F: 0.212\ntestset: URL, BLEU: 8.1, chr-F: 0.247\ntestset: URL, BLEU: 1.7, chr-F: 0.196\ntestset: URL, BLEU: 10.7, chr-F: 0.105\ntestset: URL, BLEU: 10.7, chr-F: 0.128\ntestset: URL, BLEU: 16.0, chr-F: 0.135\ntestset: URL, BLEU: 16.0, chr-F: 0.121\ntestset: URL, BLEU: 1.5, chr-F: 0.136\ntestset: URL, BLEU: 22.7, chr-F: 0.655\ntestset: URL, BLEU: 3.1, chr-F: 0.110\ntestset: URL, BLEU: 47.4, chr-F: 0.676\ntestset: URL, BLEU: 54.7, chr-F: 0.704\ntestset: URL, BLEU: 4.8, chr-F: 0.291\ntestset: URL, BLEU: 9.7, chr-F: 0.120\ntestset: URL, BLEU: 3.8, chr-F: 0.240\ntestset: URL, BLEU: 66.1, chr-F: 0.678\ntestset: URL, BLEU: 78.3, chr-F: 0.563\ntestset: URL, BLEU: 6.2, chr-F: 0.335\ntestset: URL, BLEU: 60.0, chr-F: 0.748\ntestset: URL, BLEU: 68.1, chr-F: 0.812\ntestset: URL, BLEU: 65.0, chr-F: 0.785\ntestset: URL, BLEU: 2.6, chr-F: 0.182\ntestset: URL, BLEU: 9.3, chr-F: 0.226\ntestset: URL, BLEU: 50.3, chr-F: 0.682\ntestset: URL, BLEU: 0.5, chr-F: 0.118\ntestset: URL, BLEU: 49.6, chr-F: 0.679\ntestset: URL, BLEU: 43.4, chr-F: 0.618\ntestset: URL, BLEU: 2.2, chr-F: 0.159\ntestset: URL, BLEU: 0.4, chr-F: 0.156\ntestset: URL, BLEU: 10.7, chr-F: 0.355\ntestset: URL, BLEU: 0.7, chr-F: 0.183\ntestset: URL, BLEU: 0.3, chr-F: 0.010\ntestset: URL, BLEU: 1.1, chr-F: 0.130\ntestset: URL, BLEU: 24.3, chr-F: 0.504\ntestset: URL, BLEU: 0.9, chr-F: 0.173\ntestset: URL, BLEU: 15.6, chr-F: 0.304\ntestset: URL, BLEU: 21.2, chr-F: 0.469\ntestset: URL, BLEU: 47.1, chr-F: 0.657\ntestset: URL, BLEU: 43.9, chr-F: 0.646\ntestset: URL, BLEU: 3.0, chr-F: 0.133\ntestset: URL, BLEU: 12.0, chr-F: 
0.296\ntestset: URL, BLEU: 0.6, chr-F: 0.137\ntestset: URL, BLEU: 50.6, chr-F: 0.668\ntestset: URL, BLEU: 0.2, chr-F: 0.137\ntestset: URL, BLEU: 3.9, chr-F: 0.229\ntestset: URL, BLEU: 55.2, chr-F: 0.721\ntestset: URL, BLEU: 4.9, chr-F: 0.118\ntestset: URL, BLEU: 52.6, chr-F: 0.684\ntestset: URL, BLEU: 35.4, chr-F: 0.573\ntestset: URL, BLEU: 1.8, chr-F: 0.223\ntestset: URL, BLEU: 7.0, chr-F: 0.312\ntestset: URL, BLEU: 1.2, chr-F: 0.050\ntestset: URL, BLEU: 15.8, chr-F: 0.381\ntestset: URL, BLEU: 0.7, chr-F: 0.170\ntestset: URL, BLEU: 0.3, chr-F: 0.011\ntestset: URL, BLEU: 0.5, chr-F: 0.126\ntestset: URL, BLEU: 20.9, chr-F: 0.463\ntestset: URL, BLEU: 1.0, chr-F: 0.141\ntestset: URL, BLEU: 12.8, chr-F: 0.292\ntestset: URL, BLEU: 18.3, chr-F: 0.428\ntestset: URL, BLEU: 47.3, chr-F: 0.657\ntestset: URL, BLEU: 0.3, chr-F: 0.145\ntestset: URL, BLEU: 47.2, chr-F: 0.650\ntestset: URL, BLEU: 4.8, chr-F: 0.177\ntestset: URL, BLEU: 38.1, chr-F: 0.597\ntestset: URL, BLEU: 2.4, chr-F: 0.288\ntestset: URL, BLEU: 52.7, chr-F: 0.677\ntestset: URL, BLEU: 1.1, chr-F: 0.163\ntestset: URL, BLEU: 4.5, chr-F: 0.223\ntestset: URL, BLEU: 22.8, chr-F: 0.401\ntestset: URL, BLEU: 0.4, chr-F: 0.062\ntestset: URL, BLEU: 51.4, chr-F: 0.782\ntestset: URL, BLEU: 33.8, chr-F: 0.473\ntestset: URL, BLEU: 22.4, chr-F: 0.495\ntestset: URL, BLEU: 16.0, chr-F: 0.173\ntestset: URL, BLEU: 6.1, chr-F: 0.222\ntestset: URL, BLEU: 59.5, chr-F: 0.651\ntestset: URL, BLEU: 10.5, chr-F: 0.130\ntestset: URL, BLEU: 18.1, chr-F: 0.327\ntestset: URL, BLEU: 38.3, chr-F: 0.546\ntestset: URL, BLEU: 15.6, chr-F: 0.290\ntestset: URL, BLEU: 2.3, chr-F: 0.215\ntestset: URL, BLEU: 2.1, chr-F: 0.035\ntestset: URL, BLEU: 53.7, chr-F: 0.625\ntestset: URL, BLEU: 24.7, chr-F: 0.435\ntestset: URL, BLEU: 12.7, chr-F: 0.116\ntestset: URL, BLEU: 26.3, chr-F: 0.341\ntestset: URL, BLEU: 41.9, chr-F: 0.586\ntestset: URL, BLEU: 0.0, chr-F: 1.000\ntestset: URL, BLEU: 7.4, chr-F: 0.263\ntestset: URL, BLEU: 7.0, chr-F: 0.157\ntestset: URL, 
BLEU: 4.0, chr-F: 0.112\ntestset: URL, BLEU: 1.0, chr-F: 0.135\ntestset: URL, BLEU: 12.4, chr-F: 0.207\ntestset: URL, BLEU: 10.6, chr-F: 0.227\ntestset: URL, BLEU: 1.0, chr-F: 0.058\ntestset: URL, BLEU: 12.7, chr-F: 0.333\ntestset: URL, BLEU: 30.8, chr-F: 0.555\ntestset: URL, BLEU: 31.2, chr-F: 0.506\ntestset: URL, BLEU: 0.0, chr-F: 0.175\ntestset: URL, BLEU: 1.6, chr-F: 0.091\ntestset: URL, BLEU: 1.1, chr-F: 0.254\ntestset: URL, BLEU: 30.4, chr-F: 0.526\ntestset: URL, BLEU: 12.4, chr-F: 0.116\ntestset: URL, BLEU: 43.4, chr-F: 0.637\ntestset: URL, BLEU: 47.1, chr-F: 0.607\ntestset: URL, BLEU: 0.6, chr-F: 0.181\ntestset: URL, BLEU: 30.2, chr-F: 0.587\ntestset: URL, BLEU: 3.1, chr-F: 0.173\ntestset: URL, BLEU: 1.8, chr-F: 0.215\ntestset: URL, BLEU: 0.0, chr-F: 0.045\ntestset: URL, BLEU: 4.1, chr-F: 0.236\ntestset: URL, BLEU: 19.6, chr-F: 0.406\ntestset: URL, BLEU: 15.1, chr-F: 0.329\ntestset: URL, BLEU: 5.8, chr-F: 0.271\ntestset: URL, BLEU: 19.0, chr-F: 0.136\ntestset: URL, BLEU: 1.3, chr-F: 0.119\ntestset: URL, BLEU: 17.1, chr-F: 0.388\ntestset: URL, BLEU: 16.8, chr-F: 0.356\ntestset: URL, BLEU: 3.6, chr-F: 0.174\ntestset: URL, BLEU: 4.7, chr-F: 0.225\ntestset: URL, BLEU: 16.3, chr-F: 0.406\ntestset: URL, BLEU: 0.7, chr-F: 0.154\ntestset: URL, BLEU: 8.6, chr-F: 0.319\ntestset: URL, BLEU: 4.4, chr-F: 0.165\ntestset: URL, BLEU: 0.2, chr-F: 0.041\ntestset: URL, BLEU: 0.2, chr-F: 0.068\ntestset: URL, BLEU: 0.6, chr-F: 0.000\ntestset: URL, BLEU: 15.9, chr-F: 0.373\ntestset: URL, BLEU: 14.7, chr-F: 0.320\ntestset: URL, BLEU: 38.0, chr-F: 0.641\ntestset: URL, BLEU: 0.0, chr-F: 0.037\ntestset: URL, BLEU: 67.7, chr-F: 0.836\ntestset: URL, BLEU: 42.6, chr-F: 0.614\ntestset: URL, BLEU: 43.5, chr-F: 0.610\ntestset: URL, BLEU: 12.4, chr-F: 0.123\ntestset: URL, BLEU: 15.6, chr-F: 0.176\ntestset: URL, BLEU: 7.1, chr-F: 0.257\ntestset: URL, BLEU: 53.5, chr-F: 0.690\ntestset: URL, BLEU: 10.7, chr-F: 0.176\ntestset: URL, BLEU: 67.7, chr-F: 0.818\ntestset: URL, BLEU: 11.8, chr-F: 
0.393\ntestset: URL, BLEU: 4.0, chr-F: 0.239\ntestset: URL, BLEU: 9.5, chr-F: 0.085\ntestset: URL, BLEU: 36.5, chr-F: 0.529\ntestset: URL, BLEU: 0.0, chr-F: 0.043\ntestset: URL, BLEU: 80.6, chr-F: 0.722\ntestset: URL, BLEU: 40.1, chr-F: 0.581\ntestset: URL, BLEU: 36.1, chr-F: 0.511\ntestset: URL, BLEU: 16.5, chr-F: 0.524\ntestset: URL, BLEU: 0.7, chr-F: 0.118\ntestset: URL, BLEU: 40.4, chr-F: 0.535\ntestset: URL, BLEU: 19.1, chr-F: 0.582\ntestset: URL, BLEU: 2.4, chr-F: 0.093\ntestset: URL, BLEU: 25.9, chr-F: 0.430\ntestset: URL, BLEU: 1.5, chr-F: 0.160\ntestset: URL, BLEU: 42.7, chr-F: 0.614\ntestset: URL, BLEU: 23.0, chr-F: 0.465\ntestset: URL, BLEU: 39.8, chr-F: 0.610\ntestset: URL, BLEU: 32.0, chr-F: 0.520\ntestset: URL, BLEU: 3.9, chr-F: 0.156\ntestset: URL, BLEU: 10.7, chr-F: 0.127\ntestset: URL, BLEU: 10.7, chr-F: 0.231\ntestset: URL, BLEU: 0.8, chr-F: 0.157\ntestset: URL, BLEU: 44.1, chr-F: 0.634\ntestset: URL, BLEU: 47.1, chr-F: 0.665\ntestset: URL, BLEU: 0.5, chr-F: 0.166\ntestset: URL, BLEU: 12.7, chr-F: 0.337\ntestset: URL, BLEU: 58.4, chr-F: 0.748\ntestset: URL, BLEU: 61.3, chr-F: 0.753\ntestset: URL, BLEU: 48.2, chr-F: 0.670\ntestset: URL, BLEU: 52.8, chr-F: 0.690\ntestset: URL, BLEU: 5.7, chr-F: 0.178\ntestset: URL, BLEU: 0.9, chr-F: 0.159\ntestset: URL, BLEU: 23.0, chr-F: 0.467\ntestset: URL, BLEU: 1.0, chr-F: 0.165\ntestset: URL, BLEU: 14.4, chr-F: 0.310\ntestset: URL, BLEU: 24.1, chr-F: 0.485\ntestset: URL, BLEU: 53.6, chr-F: 0.705\ntestset: URL, BLEU: 15.0, chr-F: 0.415\ntestset: URL, BLEU: 0.5, chr-F: 0.183\ntestset: URL, BLEU: 73.6, chr-F: 0.842\ntestset: URL, BLEU: 4.2, chr-F: 0.191\ntestset: URL, BLEU: 9.4, chr-F: 0.299\ntestset: URL, BLEU: 27.7, chr-F: 0.501\ntestset: URL, BLEU: 48.2, chr-F: 0.687\ntestset: URL, BLEU: 69.5, chr-F: 0.820\ntestset: URL, BLEU: 41.1, chr-F: 0.634\ntestset: URL, BLEU: 49.4, chr-F: 0.660\ntestset: URL, BLEU: 6.8, chr-F: 0.230\ntestset: URL, BLEU: 6.9, chr-F: 0.395\ntestset: URL, BLEU: 9.2, chr-F: 0.323\ntestset: 
URL, BLEU: 1.5, chr-F: 0.000\ntestset: URL, BLEU: 34.5, chr-F: 0.555\ntestset: URL, BLEU: 22.1, chr-F: 0.447\ntestset: URL, BLEU: 34.3, chr-F: 0.565\ntestset: URL, BLEU: 50.5, chr-F: 0.676\ntestset: URL, BLEU: 57.6, chr-F: 0.764\ntestset: URL, BLEU: 68.9, chr-F: 0.813\ntestset: URL, BLEU: 65.0, chr-F: 0.627\ntestset: URL, BLEU: 43.5, chr-F: 0.559\ntestset: URL, BLEU: 26.1, chr-F: 0.471\ntestset: URL, BLEU: 7.1, chr-F: 0.295\ntestset: URL, BLEU: 34.4, chr-F: 0.551\ntestset: URL, BLEU: 9.9, chr-F: 0.438\ntestset: URL, BLEU: 8.6, chr-F: 0.385\ntestset: URL, BLEU: 21.8, chr-F: 0.431\ntestset: URL, BLEU: 2.1, chr-F: 0.111\ntestset: URL, BLEU: 7.6, chr-F: 0.267\ntestset: URL, BLEU: 0.7, chr-F: 0.198\ntestset: URL, BLEU: 16.0, chr-F: 0.121\ntestset: URL, BLEU: 3.8, chr-F: 0.150\ntestset: URL, BLEU: 14.6, chr-F: 0.375\ntestset: URL, BLEU: 2.4, chr-F: 0.096\ntestset: URL, BLEU: 51.8, chr-F: 0.802\ntestset: URL, BLEU: 64.9, chr-F: 0.784\ntestset: URL, BLEU: 47.0, chr-F: 0.657\ntestset: URL, BLEU: 55.8, chr-F: 0.700\ntestset: URL, BLEU: 0.0, chr-F: 0.060\ntestset: URL, BLEU: 14.1, chr-F: 0.449\ntestset: URL, BLEU: 7.5, chr-F: 0.291\ntestset: URL, BLEU: 70.7, chr-F: 0.812\ntestset: URL, BLEU: 15.9, chr-F: 0.553\ntestset: URL, BLEU: 78.7, chr-F: 0.854\ntestset: URL, BLEU: 67.1, chr-F: 0.799\ntestset: URL, BLEU: 14.7, chr-F: 0.156\ntestset: URL, BLEU: 7.7, chr-F: 0.341\ntestset: URL, BLEU: 8.0, chr-F: 0.334\ntestset: URL, BLEU: 12.4, chr-F: 0.305\ntestset: URL, BLEU: 1.1, chr-F: 0.209\ntestset: URL, BLEU: 4.9, chr-F: 0.244\ntestset: URL, BLEU: 3.4, chr-F: 0.194\ntestset: URL, BLEU: 23.6, chr-F: 0.552\ntestset: URL, BLEU: 0.1, chr-F: 0.066\ntestset: URL, BLEU: 17.5, chr-F: 0.392\ntestset: URL, BLEU: 21.0, chr-F: 0.423\ntestset: URL, BLEU: 17.4, chr-F: 0.368\ntestset: URL, BLEU: 0.6, chr-F: 0.143\ntestset: URL, BLEU: 5.3, chr-F: 0.169\ntestset: URL, BLEU: 1.2, chr-F: 0.149\ntestset: URL, BLEU: 3.5, chr-F: 0.256\ntestset: URL, BLEU: 14.4, chr-F: 0.487\ntestset: URL, BLEU: 26.1, 
chr-F: 0.423\ntestset: URL, BLEU: 47.1, chr-F: 0.583\ntestset: URL, BLEU: 1.5, chr-F: 0.092\ntestset: URL, BLEU: 35.9, chr-F: 0.518\ntestset: URL, BLEU: 1.0, chr-F: 0.124",
"### System Info:\n\n\n* hf\\_name: gem-gem\n* source\\_languages: gem\n* target\\_languages: gem\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']\n* src\\_constituents: {'ksh', 'enm\\_Latn', 'got\\_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob\\_Hebr', 'ang\\_Latn', 'frr', 'non\\_Latn', 'yid', 'nds'}\n* tgt\\_constituents: {'ksh', 'enm\\_Latn', 'got\\_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob\\_Hebr', 'ang\\_Latn', 'frr', 'non\\_Latn', 'yid', 'nds'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: gem\n* tgt\\_alpha3: gem\n* short\\_pair: gem-gem\n* chrF2\\_score: 0.614\n* bleu: 42.7\n* brevity\\_penalty: 0.993\n* ref\\_len: 73459.0\n* src\\_name: Germanic languages\n* tgt\\_name: Germanic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: gem\n* tgt\\_alpha2: gem\n* prefer\\_old: False\n* long\\_pair: gem-gem\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
79,
6721,
740
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #da #sv #af #nn #fy #fo #de #nb #nl #is #en #lb #yi #gem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### gem-gem\n\n\n* source group: Germanic languages\n* target group: Germanic languages\n* OPUS readme: gem-gem\n* model: transformer\n* source language(s): afr ang\\_Latn dan deu eng enm\\_Latn fao frr fry gos got\\_Goth gsw isl ksh ltz nds nld nno nob nob\\_Hebr non\\_Latn pdc sco stq swe swg yid\n* target language(s): afr ang\\_Latn dan deu eng enm\\_Latn fao frr fry gos got\\_Goth gsw isl ksh ltz nds nld nno nob nob\\_Hebr non\\_Latn pdc sco stq swe swg yid\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.5, chr-F: 0.519\ntestset: URL, BLEU: 18.7, chr-F: 0.495\ntestset: URL, BLEU: 22.8, chr-F: 0.509\ntestset: URL, BLEU: 18.6, chr-F: 0.485\ntestset: URL, BLEU: 22.2, chr-F: 0.507\ntestset: URL, BLEU: 18.3, chr-F: 0.491\ntestset: URL, BLEU: 24.8, chr-F: 0.537\ntestset: URL, BLEU: 19.7, chr-F: 0.499\ntestset: URL, BLEU: 22.9, chr-F: 0.516\ntestset: URL, BLEU: 18.3, chr-F: 0.485\ntestset: URL, BLEU: 23.9, chr-F: 0.524\ntestset: URL, BLEU: 18.5, chr-F: 0.484\ntestset: URL, BLEU: 26.3, chr-F: 0.537\ntestset: URL, BLEU: 21.5, chr-F: 0.506\ntestset: URL, BLEU: 25.7, chr-F: 0.535\ntestset: URL, BLEU: 27.3, chr-F: 0.542\ntestset: URL, BLEU: 24.2, chr-F: 0.534\ntestset: URL, BLEU: 31.8, chr-F: 0.584\ntestset: URL, BLEU: 28.4, chr-F: 0.564\ntestset: URL, BLEU: 27.6, chr-F: 0.545\ntestset: URL, BLEU: 22.8, chr-F: 0.527\ntestset: URL, BLEU: 34.1, chr-F: 0.593\ntestset: URL, BLEU: 32.7, chr-F: 0.595\ntestset: URL, BLEU: 30.6, chr-F: 0.565\ntestset: URL, BLEU: 29.5, chr-F: 
0.567\ntestset: URL, BLEU: 0.0, chr-F: 0.053\ntestset: URL, BLEU: 57.8, chr-F: 0.907\ntestset: URL, BLEU: 46.4, chr-F: 0.663\ntestset: URL, BLEU: 57.4, chr-F: 0.717\ntestset: URL, BLEU: 11.3, chr-F: 0.285\ntestset: URL, BLEU: 0.0, chr-F: 0.167\ntestset: URL, BLEU: 1.5, chr-F: 0.178\ntestset: URL, BLEU: 29.0, chr-F: 0.760\ntestset: URL, BLEU: 11.2, chr-F: 0.246\ntestset: URL, BLEU: 53.3, chr-F: 0.708\ntestset: URL, BLEU: 66.0, chr-F: 0.752\ntestset: URL, BLEU: 88.0, chr-F: 0.955\ntestset: URL, BLEU: 59.5, chr-F: 0.443\ntestset: URL, BLEU: 10.7, chr-F: 0.043\ntestset: URL, BLEU: 6.3, chr-F: 0.190\ntestset: URL, BLEU: 1.4, chr-F: 0.212\ntestset: URL, BLEU: 8.1, chr-F: 0.247\ntestset: URL, BLEU: 1.7, chr-F: 0.196\ntestset: URL, BLEU: 10.7, chr-F: 0.105\ntestset: URL, BLEU: 10.7, chr-F: 0.128\ntestset: URL, BLEU: 16.0, chr-F: 0.135\ntestset: URL, BLEU: 16.0, chr-F: 0.121\ntestset: URL, BLEU: 1.5, chr-F: 0.136\ntestset: URL, BLEU: 22.7, chr-F: 0.655\ntestset: URL, BLEU: 3.1, chr-F: 0.110\ntestset: URL, BLEU: 47.4, chr-F: 0.676\ntestset: URL, BLEU: 54.7, chr-F: 0.704\ntestset: URL, BLEU: 4.8, chr-F: 0.291\ntestset: URL, BLEU: 9.7, chr-F: 0.120\ntestset: URL, BLEU: 3.8, chr-F: 0.240\ntestset: URL, BLEU: 66.1, chr-F: 0.678\ntestset: URL, BLEU: 78.3, chr-F: 0.563\ntestset: URL, BLEU: 6.2, chr-F: 0.335\ntestset: URL, BLEU: 60.0, chr-F: 0.748\ntestset: URL, BLEU: 68.1, chr-F: 0.812\ntestset: URL, BLEU: 65.0, chr-F: 0.785\ntestset: URL, BLEU: 2.6, chr-F: 0.182\ntestset: URL, BLEU: 9.3, chr-F: 0.226\ntestset: URL, BLEU: 50.3, chr-F: 0.682\ntestset: URL, BLEU: 0.5, chr-F: 0.118\ntestset: URL, BLEU: 49.6, chr-F: 0.679\ntestset: URL, BLEU: 43.4, chr-F: 0.618\ntestset: URL, BLEU: 2.2, chr-F: 0.159\ntestset: URL, BLEU: 0.4, chr-F: 0.156\ntestset: URL, BLEU: 10.7, chr-F: 0.355\ntestset: URL, BLEU: 0.7, chr-F: 0.183\ntestset: URL, BLEU: 0.3, chr-F: 0.010\ntestset: URL, BLEU: 1.1, chr-F: 0.130\ntestset: URL, BLEU: 24.3, chr-F: 0.504\ntestset: URL, BLEU: 0.9, chr-F: 0.173\ntestset: URL, 
BLEU: 15.6, chr-F: 0.304\ntestset: URL, BLEU: 21.2, chr-F: 0.469\ntestset: URL, BLEU: 47.1, chr-F: 0.657\ntestset: URL, BLEU: 43.9, chr-F: 0.646\ntestset: URL, BLEU: 3.0, chr-F: 0.133\ntestset: URL, BLEU: 12.0, chr-F: 0.296\ntestset: URL, BLEU: 0.6, chr-F: 0.137\ntestset: URL, BLEU: 50.6, chr-F: 0.668\ntestset: URL, BLEU: 0.2, chr-F: 0.137\ntestset: URL, BLEU: 3.9, chr-F: 0.229\ntestset: URL, BLEU: 55.2, chr-F: 0.721\ntestset: URL, BLEU: 4.9, chr-F: 0.118\ntestset: URL, BLEU: 52.6, chr-F: 0.684\ntestset: URL, BLEU: 35.4, chr-F: 0.573\ntestset: URL, BLEU: 1.8, chr-F: 0.223\ntestset: URL, BLEU: 7.0, chr-F: 0.312\ntestset: URL, BLEU: 1.2, chr-F: 0.050\ntestset: URL, BLEU: 15.8, chr-F: 0.381\ntestset: URL, BLEU: 0.7, chr-F: 0.170\ntestset: URL, BLEU: 0.3, chr-F: 0.011\ntestset: URL, BLEU: 0.5, chr-F: 0.126\ntestset: URL, BLEU: 20.9, chr-F: 0.463\ntestset: URL, BLEU: 1.0, chr-F: 0.141\ntestset: URL, BLEU: 12.8, chr-F: 0.292\ntestset: URL, BLEU: 18.3, chr-F: 0.428\ntestset: URL, BLEU: 47.3, chr-F: 0.657\ntestset: URL, BLEU: 0.3, chr-F: 0.145\ntestset: URL, BLEU: 47.2, chr-F: 0.650\ntestset: URL, BLEU: 4.8, chr-F: 0.177\ntestset: URL, BLEU: 38.1, chr-F: 0.597\ntestset: URL, BLEU: 2.4, chr-F: 0.288\ntestset: URL, BLEU: 52.7, chr-F: 0.677\ntestset: URL, BLEU: 1.1, chr-F: 0.163\ntestset: URL, BLEU: 4.5, chr-F: 0.223\ntestset: URL, BLEU: 22.8, chr-F: 0.401\ntestset: URL, BLEU: 0.4, chr-F: 0.062\ntestset: URL, BLEU: 51.4, chr-F: 0.782\ntestset: URL, BLEU: 33.8, chr-F: 0.473\ntestset: URL, BLEU: 22.4, chr-F: 0.495\ntestset: URL, BLEU: 16.0, chr-F: 0.173\ntestset: URL, BLEU: 6.1, chr-F: 0.222\ntestset: URL, BLEU: 59.5, chr-F: 0.651\ntestset: URL, BLEU: 10.5, chr-F: 0.130\ntestset: URL, BLEU: 18.1, chr-F: 0.327\ntestset: URL, BLEU: 38.3, chr-F: 0.546\ntestset: URL, BLEU: 15.6, chr-F: 0.290\ntestset: URL, BLEU: 2.3, chr-F: 0.215\ntestset: URL, BLEU: 2.1, chr-F: 0.035\ntestset: URL, BLEU: 53.7, chr-F: 0.625\ntestset: URL, BLEU: 24.7, chr-F: 0.435\ntestset: URL, BLEU: 12.7, chr-F: 
0.116\ntestset: URL, BLEU: 26.3, chr-F: 0.341\ntestset: URL, BLEU: 41.9, chr-F: 0.586\ntestset: URL, BLEU: 0.0, chr-F: 1.000\ntestset: URL, BLEU: 7.4, chr-F: 0.263\ntestset: URL, BLEU: 7.0, chr-F: 0.157\ntestset: URL, BLEU: 4.0, chr-F: 0.112\ntestset: URL, BLEU: 1.0, chr-F: 0.135\ntestset: URL, BLEU: 12.4, chr-F: 0.207\ntestset: URL, BLEU: 10.6, chr-F: 0.227\ntestset: URL, BLEU: 1.0, chr-F: 0.058\ntestset: URL, BLEU: 12.7, chr-F: 0.333\ntestset: URL, BLEU: 30.8, chr-F: 0.555\ntestset: URL, BLEU: 31.2, chr-F: 0.506\ntestset: URL, BLEU: 0.0, chr-F: 0.175\ntestset: URL, BLEU: 1.6, chr-F: 0.091\ntestset: URL, BLEU: 1.1, chr-F: 0.254\ntestset: URL, BLEU: 30.4, chr-F: 0.526\ntestset: URL, BLEU: 12.4, chr-F: 0.116\ntestset: URL, BLEU: 43.4, chr-F: 0.637\ntestset: URL, BLEU: 47.1, chr-F: 0.607\ntestset: URL, BLEU: 0.6, chr-F: 0.181\ntestset: URL, BLEU: 30.2, chr-F: 0.587\ntestset: URL, BLEU: 3.1, chr-F: 0.173\ntestset: URL, BLEU: 1.8, chr-F: 0.215\ntestset: URL, BLEU: 0.0, chr-F: 0.045\ntestset: URL, BLEU: 4.1, chr-F: 0.236\ntestset: URL, BLEU: 19.6, chr-F: 0.406\ntestset: URL, BLEU: 15.1, chr-F: 0.329\ntestset: URL, BLEU: 5.8, chr-F: 0.271\ntestset: URL, BLEU: 19.0, chr-F: 0.136\ntestset: URL, BLEU: 1.3, chr-F: 0.119\ntestset: URL, BLEU: 17.1, chr-F: 0.388\ntestset: URL, BLEU: 16.8, chr-F: 0.356\ntestset: URL, BLEU: 3.6, chr-F: 0.174\ntestset: URL, BLEU: 4.7, chr-F: 0.225\ntestset: URL, BLEU: 16.3, chr-F: 0.406\ntestset: URL, BLEU: 0.7, chr-F: 0.154\ntestset: URL, BLEU: 8.6, chr-F: 0.319\ntestset: URL, BLEU: 4.4, chr-F: 0.165\ntestset: URL, BLEU: 0.2, chr-F: 0.041\ntestset: URL, BLEU: 0.2, chr-F: 0.068\ntestset: URL, BLEU: 0.6, chr-F: 0.000\ntestset: URL, BLEU: 15.9, chr-F: 0.373\ntestset: URL, BLEU: 14.7, chr-F: 0.320\ntestset: URL, BLEU: 38.0, chr-F: 0.641\ntestset: URL, BLEU: 0.0, chr-F: 0.037\ntestset: URL, BLEU: 67.7, chr-F: 0.836\ntestset: URL, BLEU: 42.6, chr-F: 0.614\ntestset: URL, BLEU: 43.5, chr-F: 0.610\ntestset: URL, BLEU: 12.4, chr-F: 0.123\ntestset: URL, 
BLEU: 15.6, chr-F: 0.176\ntestset: URL, BLEU: 7.1, chr-F: 0.257\ntestset: URL, BLEU: 53.5, chr-F: 0.690\ntestset: URL, BLEU: 10.7, chr-F: 0.176\ntestset: URL, BLEU: 67.7, chr-F: 0.818\ntestset: URL, BLEU: 11.8, chr-F: 0.393\ntestset: URL, BLEU: 4.0, chr-F: 0.239\ntestset: URL, BLEU: 9.5, chr-F: 0.085\ntestset: URL, BLEU: 36.5, chr-F: 0.529\ntestset: URL, BLEU: 0.0, chr-F: 0.043\ntestset: URL, BLEU: 80.6, chr-F: 0.722\ntestset: URL, BLEU: 40.1, chr-F: 0.581\ntestset: URL, BLEU: 36.1, chr-F: 0.511\ntestset: URL, BLEU: 16.5, chr-F: 0.524\ntestset: URL, BLEU: 0.7, chr-F: 0.118\ntestset: URL, BLEU: 40.4, chr-F: 0.535\ntestset: URL, BLEU: 19.1, chr-F: 0.582\ntestset: URL, BLEU: 2.4, chr-F: 0.093\ntestset: URL, BLEU: 25.9, chr-F: 0.430\ntestset: URL, BLEU: 1.5, chr-F: 0.160\ntestset: URL, BLEU: 42.7, chr-F: 0.614\ntestset: URL, BLEU: 23.0, chr-F: 0.465\ntestset: URL, BLEU: 39.8, chr-F: 0.610\ntestset: URL, BLEU: 32.0, chr-F: 0.520\ntestset: URL, BLEU: 3.9, chr-F: 0.156\ntestset: URL, BLEU: 10.7, chr-F: 0.127\ntestset: URL, BLEU: 10.7, chr-F: 0.231\ntestset: URL, BLEU: 0.8, chr-F: 0.157\ntestset: URL, BLEU: 44.1, chr-F: 0.634\ntestset: URL, BLEU: 47.1, chr-F: 0.665\ntestset: URL, BLEU: 0.5, chr-F: 0.166\ntestset: URL, BLEU: 12.7, chr-F: 0.337\ntestset: URL, BLEU: 58.4, chr-F: 0.748\ntestset: URL, BLEU: 61.3, chr-F: 0.753\ntestset: URL, BLEU: 48.2, chr-F: 0.670\ntestset: URL, BLEU: 52.8, chr-F: 0.690\ntestset: URL, BLEU: 5.7, chr-F: 0.178\ntestset: URL, BLEU: 0.9, chr-F: 0.159\ntestset: URL, BLEU: 23.0, chr-F: 0.467\ntestset: URL, BLEU: 1.0, chr-F: 0.165\ntestset: URL, BLEU: 14.4, chr-F: 0.310\ntestset: URL, BLEU: 24.1, chr-F: 0.485\ntestset: URL, BLEU: 53.6, chr-F: 0.705\ntestset: URL, BLEU: 15.0, chr-F: 0.415\ntestset: URL, BLEU: 0.5, chr-F: 0.183\ntestset: URL, BLEU: 73.6, chr-F: 0.842\ntestset: URL, BLEU: 4.2, chr-F: 0.191\ntestset: URL, BLEU: 9.4, chr-F: 0.299\ntestset: URL, BLEU: 27.7, chr-F: 0.501\ntestset: URL, BLEU: 48.2, chr-F: 0.687\ntestset: URL, BLEU: 69.5, 
chr-F: 0.820\ntestset: URL, BLEU: 41.1, chr-F: 0.634\ntestset: URL, BLEU: 49.4, chr-F: 0.660\ntestset: URL, BLEU: 6.8, chr-F: 0.230\ntestset: URL, BLEU: 6.9, chr-F: 0.395\ntestset: URL, BLEU: 9.2, chr-F: 0.323\ntestset: URL, BLEU: 1.5, chr-F: 0.000\ntestset: URL, BLEU: 34.5, chr-F: 0.555\ntestset: URL, BLEU: 22.1, chr-F: 0.447\ntestset: URL, BLEU: 34.3, chr-F: 0.565\ntestset: URL, BLEU: 50.5, chr-F: 0.676\ntestset: URL, BLEU: 57.6, chr-F: 0.764\ntestset: URL, BLEU: 68.9, chr-F: 0.813\ntestset: URL, BLEU: 65.0, chr-F: 0.627\ntestset: URL, BLEU: 43.5, chr-F: 0.559\ntestset: URL, BLEU: 26.1, chr-F: 0.471\ntestset: URL, BLEU: 7.1, chr-F: 0.295\ntestset: URL, BLEU: 34.4, chr-F: 0.551\ntestset: URL, BLEU: 9.9, chr-F: 0.438\ntestset: URL, BLEU: 8.6, chr-F: 0.385\ntestset: URL, BLEU: 21.8, chr-F: 0.431\ntestset: URL, BLEU: 2.1, chr-F: 0.111\ntestset: URL, BLEU: 7.6, chr-F: 0.267\ntestset: URL, BLEU: 0.7, chr-F: 0.198\ntestset: URL, BLEU: 16.0, chr-F: 0.121\ntestset: URL, BLEU: 3.8, chr-F: 0.150\ntestset: URL, BLEU: 14.6, chr-F: 0.375\ntestset: URL, BLEU: 2.4, chr-F: 0.096\ntestset: URL, BLEU: 51.8, chr-F: 0.802\ntestset: URL, BLEU: 64.9, chr-F: 0.784\ntestset: URL, BLEU: 47.0, chr-F: 0.657\ntestset: URL, BLEU: 55.8, chr-F: 0.700\ntestset: URL, BLEU: 0.0, chr-F: 0.060\ntestset: URL, BLEU: 14.1, chr-F: 0.449\ntestset: URL, BLEU: 7.5, chr-F: 0.291\ntestset: URL, BLEU: 70.7, chr-F: 0.812\ntestset: URL, BLEU: 15.9, chr-F: 0.553\ntestset: URL, BLEU: 78.7, chr-F: 0.854\ntestset: URL, BLEU: 67.1, chr-F: 0.799\ntestset: URL, BLEU: 14.7, chr-F: 0.156\ntestset: URL, BLEU: 7.7, chr-F: 0.341\ntestset: URL, BLEU: 8.0, chr-F: 0.334\ntestset: URL, BLEU: 12.4, chr-F: 0.305\ntestset: URL, BLEU: 1.1, chr-F: 0.209\ntestset: URL, BLEU: 4.9, chr-F: 0.244\ntestset: URL, BLEU: 3.4, chr-F: 0.194\ntestset: URL, BLEU: 23.6, chr-F: 0.552\ntestset: URL, BLEU: 0.1, chr-F: 0.066\ntestset: URL, BLEU: 17.5, chr-F: 0.392\ntestset: URL, BLEU: 21.0, chr-F: 0.423\ntestset: URL, BLEU: 17.4, chr-F: 
0.368\ntestset: URL, BLEU: 0.6, chr-F: 0.143\ntestset: URL, BLEU: 5.3, chr-F: 0.169\ntestset: URL, BLEU: 1.2, chr-F: 0.149\ntestset: URL, BLEU: 3.5, chr-F: 0.256\ntestset: URL, BLEU: 14.4, chr-F: 0.487\ntestset: URL, BLEU: 26.1, chr-F: 0.423\ntestset: URL, BLEU: 47.1, chr-F: 0.583\ntestset: URL, BLEU: 1.5, chr-F: 0.092\ntestset: URL, BLEU: 35.9, chr-F: 0.518\ntestset: URL, BLEU: 1.0, chr-F: 0.124### System Info:\n\n\n* hf\\_name: gem-gem\n* source\\_languages: gem\n* target\\_languages: gem\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['da', 'sv', 'af', 'nn', 'fy', 'fo', 'de', 'nb', 'nl', 'is', 'en', 'lb', 'yi', 'gem']\n* src\\_constituents: {'ksh', 'enm\\_Latn', 'got\\_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob\\_Hebr', 'ang\\_Latn', 'frr', 'non\\_Latn', 'yid', 'nds'}\n* tgt\\_constituents: {'ksh', 'enm\\_Latn', 'got\\_Goth', 'stq', 'dan', 'swe', 'afr', 'pdc', 'gos', 'nno', 'fry', 'gsw', 'fao', 'deu', 'swg', 'sco', 'nob', 'nld', 'isl', 'eng', 'ltz', 'nob\\_Hebr', 'ang\\_Latn', 'frr', 'non\\_Latn', 'yid', 'nds'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: gem\n* tgt\\_alpha3: gem\n* short\\_pair: gem-gem\n* chrF2\\_score: 0.614\n* bleu: 42.7\n* brevity\\_penalty: 0.993\n* ref\\_len: 73459.0\n* src\\_name: Germanic languages\n* tgt\\_name: Germanic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: gem\n* tgt\\_alpha2: gem\n* prefer\\_old: False\n* long\\_pair: gem-gem\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |