### opus-mt-gil-en
* source languages: gil
* target languages: en
* OPUS readme: [gil-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.en | 36.0 | 0.522 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-gil-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"gil",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #gil #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-gil-en
* source languages: gil
* target languages: en
* OPUS readme: gil-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 36.0, chr-F: 0.522
| [
"### opus-mt-gil-en\n\n\n* source languages: gil\n* target languages: en\n* OPUS readme: gil-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.0, chr-F: 0.522"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gil #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-gil-en\n\n\n* source languages: gil\n* target languages: en\n* OPUS readme: gil-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.0, chr-F: 0.522"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gil #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-gil-en\n\n\n* source languages: gil\n* target languages: en\n* OPUS readme: gil-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.0, chr-F: 0.522"
] |
translation | transformers |
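
The card itself does not include a usage snippet. As a minimal sketch, assuming the standard MarianMT API in `transformers` (which OPUS-MT checkpoints use), the model can be loaded by its Hub id; the Gilbertese input sentence is an illustrative placeholder, not drawn from the test set:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-gil-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize, generate a translation, and decode it back to text.
batch = tokenizer(["Ko na mauri!"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```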
### opus-mt-gil-es
* source languages: gil
* target languages: es
* OPUS readme: [gil-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.es | 21.8 | 0.398 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-gil-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"gil",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #gil #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-gil-es
* source languages: gil
* target languages: es
* OPUS readme: gil-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.8, chr-F: 0.398
| [
"### opus-mt-gil-es\n\n\n* source languages: gil\n* target languages: es\n* OPUS readme: gil-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.398"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gil #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-gil-es\n\n\n* source languages: gil\n* target languages: es\n* OPUS readme: gil-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.398"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gil #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-gil-es\n\n\n* source languages: gil\n* target languages: es\n* OPUS readme: gil-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.398"
] |
translation | transformers |
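
The checkpoint can also be driven through the high-level `pipeline` API; a minimal sketch, with an illustrative placeholder input:

```python
from transformers import pipeline

# The pipeline wraps tokenization, generation, and decoding in one call.
translate = pipeline("translation", model="Helsinki-NLP/opus-mt-gil-es")
print(translate("Ko rabwa.")[0]["translation_text"])
```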
### opus-mt-gil-fi
* source languages: gil
* target languages: fi
* OPUS readme: [gil-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.fi | 23.1 | 0.447 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-gil-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"gil",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #gil #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-gil-fi
* source languages: gil
* target languages: fi
* OPUS readme: gil-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.1, chr-F: 0.447
| [
"### opus-mt-gil-fi\n\n\n* source languages: gil\n* target languages: fi\n* OPUS readme: gil-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.1, chr-F: 0.447"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gil #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-gil-fi\n\n\n* source languages: gil\n* target languages: fi\n* OPUS readme: gil-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.1, chr-F: 0.447"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gil #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-gil-fi\n\n\n* source languages: gil\n* target languages: fi\n* OPUS readme: gil-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.1, chr-F: 0.447"
] |
translation | transformers |
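
The Hub repository is also tagged `tf`, so the TensorFlow classes should work as well; a minimal sketch, assuming `tensorflow` is installed, with a placeholder input:

```python
from transformers import MarianTokenizer, TFMarianMTModel

model_name = "Helsinki-NLP/opus-mt-gil-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = TFMarianMTModel.from_pretrained(model_name)

# Same flow as the PyTorch classes, but with TensorFlow tensors.
batch = tokenizer(["Ko na mauri!"], return_tensors="tf", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```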
### opus-mt-gil-fr
* source languages: gil
* target languages: fr
* OPUS readme: [gil-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.fr | 24.9 | 0.424 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-gil-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"gil",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #gil #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-gil-fr
* source languages: gil
* target languages: fr
* OPUS readme: gil-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.9, chr-F: 0.424
| [
"### opus-mt-gil-fr\n\n\n* source languages: gil\n* target languages: fr\n* OPUS readme: gil-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.9, chr-F: 0.424"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gil #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-gil-fr\n\n\n* source languages: gil\n* target languages: fr\n* OPUS readme: gil-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.9, chr-F: 0.424"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gil #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-gil-fr\n\n\n* source languages: gil\n* target languages: fr\n* OPUS readme: gil-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.9, chr-F: 0.424"
] |
translation | transformers |
### opus-mt-gil-sv
* source languages: gil
* target languages: sv
* OPUS readme: [gil-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gil-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gil-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gil-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.gil.sv | 25.8 | 0.441 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-gil-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"gil",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #gil #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-gil-sv
* source languages: gil
* target languages: sv
* OPUS readme: gil-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.8, chr-F: 0.441
| [
"### opus-mt-gil-sv\n\n\n* source languages: gil\n* target languages: sv\n* OPUS readme: gil-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.8, chr-F: 0.441"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gil #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-gil-sv\n\n\n* source languages: gil\n* target languages: sv\n* OPUS readme: gil-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.8, chr-F: 0.441"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gil #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-gil-sv\n\n\n* source languages: gil\n* target languages: sv\n* OPUS readme: gil-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.8, chr-F: 0.441"
] |
translation | transformers |
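
For the original Marian weights (as opposed to the converted Hub checkpoint), the "download original weights" link above can be fetched directly; a standard-library sketch:

```python
import urllib.request
import zipfile

# URL taken from the "download original weights" link in the card above.
url = "https://object.pouta.csc.fi/OPUS-MT-models/gil-sv/opus-2020-01-09.zip"
urllib.request.urlretrieve(url, "opus-2020-01-09.zip")

# Unpack the original Marian model files into a local directory.
with zipfile.ZipFile("opus-2020-01-09.zip") as zf:
    zf.extractall("opus-mt-gil-sv-original")
```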
### glg-eng
* source group: Galician
* target group: English
* OPUS readme: [glg-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-eng/README.md)
* model: transformer-align
* source language(s): glg
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.glg.eng | 44.4 | 0.628 |
### System Info:
- hf_name: glg-eng
- source_languages: glg
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gl', 'en']
- src_constituents: {'glg'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-eng/opus-2020-06-16.test.txt
- src_alpha3: glg
- tgt_alpha3: eng
- short_pair: gl-en
- chrF2_score: 0.628
- bleu: 44.4
- brevity_penalty: 0.975
- ref_len: 8365.0
- src_name: Galician
- tgt_name: English
- train_date: 2020-06-16
- src_alpha2: gl
- tgt_alpha2: en
- prefer_old: False
- long_pair: glg-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41

On the Hugging Face Hub this model is published as `Helsinki-NLP/opus-mt-gl-en` under the Apache 2.0 license.
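
The card does not say which scorer produced the numbers above, but BLEU, chrF, and the brevity penalty in System Info can be reproduced with a tool such as `sacrebleu` (an assumed choice, not stated in the card). Here `hyps` and `refs` stand for the system outputs and references from the linked test-set files:

```python
import sacrebleu

hyps = ["..."]  # decoded system outputs, one string per test sentence
refs = ["..."]  # reference translations, aligned with hyps

bleu = sacrebleu.corpus_bleu(hyps, [refs])
chrf = sacrebleu.corpus_chrf(hyps, [refs])

# bleu.score and bleu.bp correspond to the "bleu" and "brevity_penalty"
# fields above. sacrebleu reports chrF on a 0-100 scale, so divide by 100
# to compare against the 0.628 chrF2_score in System Info.
print(bleu.score, bleu.bp, chrf.score / 100)
```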
### glg-spa
* source group: Galician
* target group: Spanish
* OPUS readme: [glg-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-spa/README.md)
* model: transformer-align
* source language(s): glg
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.glg.spa | 72.2 | 0.836 |
### System Info:
- hf_name: glg-spa
- source_languages: glg
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gl', 'es']
- src_constituents: {'glg'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.test.txt
- src_alpha3: glg
- tgt_alpha3: spa
- short_pair: gl-es
- chrF2_score: 0.836
- bleu: 72.2
- brevity_penalty: 0.982
- ref_len: 17443.0
- src_name: Galician
- tgt_name: Spanish
- train_date: 2020-06-16
- src_alpha2: gl
- tgt_alpha2: es
- prefer_old: False
- long_pair: glg-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41

On the Hugging Face Hub this model is published as `Helsinki-NLP/opus-mt-gl-es` under the Apache 2.0 license.
### glg-por
* source group: Galician
* target group: Portuguese
* OPUS readme: [glg-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-por/README.md)
* model: transformer-align
* source language(s): glg
* target language(s): por
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.glg.por | 57.9 | 0.758 |
### System Info:
- hf_name: glg-por
- source_languages: glg
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gl', 'pt']
- src_constituents: {'glg'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-por/opus-2020-06-16.test.txt
- src_alpha3: glg
- tgt_alpha3: por
- short_pair: gl-pt
- chrF2_score: 0.758
- bleu: 57.9
- brevity_penalty: 0.977
- ref_len: 3078.0
- src_name: Galician
- tgt_name: Portuguese
- train_date: 2020-06-16
- src_alpha2: gl
- tgt_alpha2: pt
- prefer_old: False
- long_pair: glg-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41

On the Hugging Face Hub this model is published as `Helsinki-NLP/opus-mt-gl-pt` under the Apache 2.0 license.
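
Generation options can matter for matching reported scores; as a hedged sketch (the card does not state its decoding settings, and the Galician input is an illustrative placeholder), beam search usually tracks corpus-level BLEU more closely than greedy decoding:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-gl-pt"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["O tempo hoxe está bo."], return_tensors="pt", padding=True)
# num_beams and max_length are illustrative choices, not the card's settings.
outputs = model.generate(**batch, num_beams=4, max_length=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```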
### gmq-eng
* source group: North Germanic languages
* target group: English
* OPUS readme: [gmq-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-eng/README.md)
* model: transformer
* source language(s): dan fao isl nno nob nob_Hebr non_Latn swe
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.zip)
* test set translations: [opus2m-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.test.txt)
* test set scores: [opus2m-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.multi.eng | 58.1 | 0.720 |
### System Info:
- hf_name: gmq-eng
- source_languages: gmq
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq', 'en']
- src_constituents: {'dan', 'nob', 'nob_Hebr', 'swe', 'isl', 'nno', 'non_Latn', 'fao'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-eng/opus2m-2020-07-26.test.txt
- src_alpha3: gmq
- tgt_alpha3: eng
- short_pair: gmq-en
- chrF2_score: 0.72
- bleu: 58.1
- brevity_penalty: 0.982
- ref_len: 72641.0
- src_name: North Germanic languages
- tgt_name: English
- train_date: 2020-07-26
- src_alpha2: gmq
- tgt_alpha2: en
- prefer_old: False
- long_pair: gmq-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41

On the Hugging Face Hub this model is published as `Helsinki-NLP/opus-mt-gmq-en` under the Apache 2.0 license.
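
Since only the source side is multilingual (tgt_multilingual is False), no target-language token is required, and sentences from any of the listed North Germanic source languages can be batched together; a minimal sketch with placeholder Danish and Swedish inputs:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-gmq-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentences = [
    "Jeg kan godt lide at læse bøger.",  # Danish
    "Jag tycker om att läsa böcker.",    # Swedish
]
# One batch with mixed source languages; no language token is needed here.
batch = tokenizer(sentences, return_tensors="pt", padding=True)
outputs = model.generate(**batch)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```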
### gmq-gmq
* source group: North Germanic languages
* target group: North Germanic languages
* OPUS readme: [gmq-gmq](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-gmq/README.md)
* model: transformer
* source language(s): dan fao isl nno nob swe
* target language(s): dan fao isl nno nob swe
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required, in the form `>>id<<` (where `id` is a valid target-language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.dan-fao.dan.fao | 8.1 | 0.173 |
| Tatoeba-test.dan-isl.dan.isl | 52.5 | 0.827 |
| Tatoeba-test.dan-nor.dan.nor | 62.8 | 0.772 |
| Tatoeba-test.dan-swe.dan.swe | 67.6 | 0.802 |
| Tatoeba-test.fao-dan.fao.dan | 11.3 | 0.306 |
| Tatoeba-test.fao-isl.fao.isl | 26.3 | 0.359 |
| Tatoeba-test.fao-nor.fao.nor | 36.8 | 0.531 |
| Tatoeba-test.fao-swe.fao.swe | 0.0 | 0.632 |
| Tatoeba-test.isl-dan.isl.dan | 67.0 | 0.739 |
| Tatoeba-test.isl-fao.isl.fao | 14.5 | 0.243 |
| Tatoeba-test.isl-nor.isl.nor | 51.8 | 0.674 |
| Tatoeba-test.isl-swe.isl.swe | 100.0 | 1.000 |
| Tatoeba-test.multi.multi | 64.7 | 0.782 |
| Tatoeba-test.nor-dan.nor.dan | 65.6 | 0.797 |
| Tatoeba-test.nor-fao.nor.fao | 9.4 | 0.362 |
| Tatoeba-test.nor-isl.nor.isl | 38.8 | 0.587 |
| Tatoeba-test.nor-nor.nor.nor | 51.9 | 0.721 |
| Tatoeba-test.nor-swe.nor.swe | 66.5 | 0.789 |
| Tatoeba-test.swe-dan.swe.dan | 67.6 | 0.802 |
| Tatoeba-test.swe-fao.swe.fao | 0.0 | 0.268 |
| Tatoeba-test.swe-isl.swe.isl | 65.8 | 0.914 |
| Tatoeba-test.swe-nor.swe.nor | 60.6 | 0.755 |
### System Info:
- hf_name: gmq-gmq
- source_languages: gmq
- target_languages: gmq
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmq-gmq/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq']
- src_constituents: {'dan', 'nob', 'nob_Hebr', 'swe', 'isl', 'nno', 'non_Latn', 'fao'}
- tgt_constituents: {'dan', 'nob', 'nob_Hebr', 'swe', 'isl', 'nno', 'non_Latn', 'fao'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gmq-gmq/opus-2020-07-27.test.txt
- src_alpha3: gmq
- tgt_alpha3: gmq
- short_pair: gmq-gmq
- chrF2_score: 0.782
- bleu: 64.7
- brevity_penalty: 0.994
- ref_len: 49385.0
- src_name: North Germanic languages
- tgt_name: North Germanic languages
- train_date: 2020-07-27
- src_alpha2: gmq
- tgt_alpha2: gmq
- prefer_old: False
- long_pair: gmq-gmq
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41

On the Hugging Face Hub this model is published as `Helsinki-NLP/opus-mt-gmq-gmq` under the Apache 2.0 license.
] | [
65,
658,
524
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #da #nb #sv #is #nn #fo #gmq #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### gmq-gmq\n\n\n* source group: North Germanic languages\n* target group: North Germanic languages\n* OPUS readme: gmq-gmq\n* model: transformer\n* source language(s): dan fao isl nno nob swe\n* target language(s): dan fao isl nno nob swe\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 8.1, chr-F: 0.173\ntestset: URL, BLEU: 52.5, chr-F: 0.827\ntestset: URL, BLEU: 62.8, chr-F: 0.772\ntestset: URL, BLEU: 67.6, chr-F: 0.802\ntestset: URL, BLEU: 11.3, chr-F: 0.306\ntestset: URL, BLEU: 26.3, chr-F: 0.359\ntestset: URL, BLEU: 36.8, chr-F: 0.531\ntestset: URL, BLEU: 0.0, chr-F: 0.632\ntestset: URL, BLEU: 67.0, chr-F: 0.739\ntestset: URL, BLEU: 14.5, chr-F: 0.243\ntestset: URL, BLEU: 51.8, chr-F: 0.674\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 64.7, chr-F: 0.782\ntestset: URL, BLEU: 65.6, chr-F: 0.797\ntestset: URL, BLEU: 9.4, chr-F: 0.362\ntestset: URL, BLEU: 38.8, chr-F: 0.587\ntestset: URL, BLEU: 51.9, chr-F: 0.721\ntestset: URL, BLEU: 66.5, chr-F: 0.789\ntestset: URL, BLEU: 67.6, chr-F: 0.802\ntestset: URL, BLEU: 0.0, chr-F: 0.268\ntestset: URL, BLEU: 65.8, chr-F: 0.914\ntestset: URL, BLEU: 60.6, chr-F: 0.755### System Info:\n\n\n* hf\\_name: gmq-gmq\n* source\\_languages: gmq\n* target\\_languages: gmq\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['da', 'nb', 'sv', 'is', 'nn', 'fo', 'gmq']\n* src\\_constituents: {'dan', 'nob', 'nob\\_Hebr', 'swe', 'isl', 'nno', 'non\\_Latn', 'fao'}\n* tgt\\_constituents: {'dan', 'nob', 'nob\\_Hebr', 'swe', 'isl', 'nno', 'non\\_Latn', 'fao'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: gmq\n* tgt\\_alpha3: gmq\n* short\\_pair: gmq-gmq\n* chrF2\\_score: 0.782\n* bleu: 64.7\n* brevity\\_penalty: 0.9940000000000001\n* ref\\_len: 49385.0\n* src\\_name: North Germanic languages\n* tgt\\_name: North Germanic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: gmq\n* tgt\\_alpha2: gmq\n* prefer\\_old: False\n* long\\_pair: gmq-gmq\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### gmw-eng
* source group: West Germanic languages
* target group: English
* OPUS readme: [gmw-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-eng/README.md)
* model: transformer
* source language(s): afr ang_Latn deu enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.zip) (a usage sketch follows this list)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.eval.txt)
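Example usage — a minimal sketch with the `transformers` pipeline. The checkpoint id `Helsinki-NLP/opus-mt-gmw-en` is the one recorded for this card; since the only target is English, no sentence-initial language token is needed on the input.

```python
from transformers import pipeline

# Target side is English only, so no ">>id<<" token is required.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-gmw-en")
print(translator("Ik hou van vertalen.")[0]["translation_text"])  # Dutch -> English
```

The tokenizer bundled with the checkpoint applies the SentencePiece (spm32k) preprocessing described above, so raw text can be passed directly.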
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-deueng.deu.eng | 27.2 | 0.538 |
| news-test2008-deueng.deu.eng | 25.7 | 0.534 |
| newstest2009-deueng.deu.eng | 25.1 | 0.530 |
| newstest2010-deueng.deu.eng | 27.9 | 0.565 |
| newstest2011-deueng.deu.eng | 25.3 | 0.539 |
| newstest2012-deueng.deu.eng | 26.6 | 0.548 |
| newstest2013-deueng.deu.eng | 29.6 | 0.565 |
| newstest2014-deen-deueng.deu.eng | 30.2 | 0.571 |
| newstest2015-ende-deueng.deu.eng | 31.5 | 0.577 |
| newstest2016-ende-deueng.deu.eng | 36.7 | 0.622 |
| newstest2017-ende-deueng.deu.eng | 32.3 | 0.585 |
| newstest2018-ende-deueng.deu.eng | 39.9 | 0.638 |
| newstest2019-deen-deueng.deu.eng | 35.9 | 0.611 |
| Tatoeba-test.afr-eng.afr.eng | 61.8 | 0.750 |
| Tatoeba-test.ang-eng.ang.eng | 7.3 | 0.220 |
| Tatoeba-test.deu-eng.deu.eng | 48.3 | 0.657 |
| Tatoeba-test.enm-eng.enm.eng | 16.1 | 0.423 |
| Tatoeba-test.frr-eng.frr.eng | 7.0 | 0.168 |
| Tatoeba-test.fry-eng.fry.eng | 28.6 | 0.488 |
| Tatoeba-test.gos-eng.gos.eng | 15.5 | 0.326 |
| Tatoeba-test.gsw-eng.gsw.eng | 12.7 | 0.308 |
| Tatoeba-test.ksh-eng.ksh.eng | 8.4 | 0.254 |
| Tatoeba-test.ltz-eng.ltz.eng | 28.7 | 0.453 |
| Tatoeba-test.multi.eng | 48.5 | 0.646 |
| Tatoeba-test.nds-eng.nds.eng | 31.4 | 0.509 |
| Tatoeba-test.nld-eng.nld.eng | 58.1 | 0.728 |
| Tatoeba-test.pdc-eng.pdc.eng | 25.1 | 0.406 |
| Tatoeba-test.sco-eng.sco.eng | 40.8 | 0.570 |
| Tatoeba-test.stq-eng.stq.eng | 20.3 | 0.380 |
| Tatoeba-test.swg-eng.swg.eng | 20.5 | 0.315 |
| Tatoeba-test.yid-eng.yid.eng | 16.0 | 0.366 |
### System Info:
- hf_name: gmw-eng
- source_languages: gmw
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']
- src_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opus2m-2020-08-01.test.txt
- src_alpha3: gmw
- tgt_alpha3: eng
- short_pair: gmw-en
- chrF2_score: 0.6459999999999999
- bleu: 48.5
- brevity_penalty: 0.997
- ref_len: 72584.0
- src_name: West Germanic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: gmw
- tgt_alpha2: en
- prefer_old: False
- long_pair: gmw-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["nl", "en", "lb", "af", "de", "fy", "yi", "gmw"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-gmw-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nl",
"en",
"lb",
"af",
"de",
"fy",
"yi",
"gmw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl",
"en",
"lb",
"af",
"de",
"fy",
"yi",
"gmw"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #nl #en #lb #af #de #fy #yi #gmw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### gmw-eng
* source group: West Germanic languages
* target group: English
* OPUS readme: gmw-eng
* model: transformer
* source language(s): afr ang\_Latn deu enm\_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.2, chr-F: 0.538
testset: URL, BLEU: 25.7, chr-F: 0.534
testset: URL, BLEU: 25.1, chr-F: 0.530
testset: URL, BLEU: 27.9, chr-F: 0.565
testset: URL, BLEU: 25.3, chr-F: 0.539
testset: URL, BLEU: 26.6, chr-F: 0.548
testset: URL, BLEU: 29.6, chr-F: 0.565
testset: URL, BLEU: 30.2, chr-F: 0.571
testset: URL, BLEU: 31.5, chr-F: 0.577
testset: URL, BLEU: 36.7, chr-F: 0.622
testset: URL, BLEU: 32.3, chr-F: 0.585
testset: URL, BLEU: 39.9, chr-F: 0.638
testset: URL, BLEU: 35.9, chr-F: 0.611
testset: URL, BLEU: 61.8, chr-F: 0.750
testset: URL, BLEU: 7.3, chr-F: 0.220
testset: URL, BLEU: 48.3, chr-F: 0.657
testset: URL, BLEU: 16.1, chr-F: 0.423
testset: URL, BLEU: 7.0, chr-F: 0.168
testset: URL, BLEU: 28.6, chr-F: 0.488
testset: URL, BLEU: 15.5, chr-F: 0.326
testset: URL, BLEU: 12.7, chr-F: 0.308
testset: URL, BLEU: 8.4, chr-F: 0.254
testset: URL, BLEU: 28.7, chr-F: 0.453
testset: URL, BLEU: 48.5, chr-F: 0.646
testset: URL, BLEU: 31.4, chr-F: 0.509
testset: URL, BLEU: 58.1, chr-F: 0.728
testset: URL, BLEU: 25.1, chr-F: 0.406
testset: URL, BLEU: 40.8, chr-F: 0.570
testset: URL, BLEU: 20.3, chr-F: 0.380
testset: URL, BLEU: 20.5, chr-F: 0.315
testset: URL, BLEU: 16.0, chr-F: 0.366
### System Info:
* hf\_name: gmw-eng
* source\_languages: gmw
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']
* src\_constituents: {'ksh', 'nld', 'eng', 'enm\_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang\_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: gmw
* tgt\_alpha3: eng
* short\_pair: gmw-en
* chrF2\_score: 0.6459999999999999
* bleu: 48.5
* brevity\_penalty: 0.997
* ref\_len: 72584.0
* src\_name: West Germanic languages
* tgt\_name: English
* train\_date: 2020-08-01
* src\_alpha2: gmw
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: gmw-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### gmw-eng\n\n\n* source group: West Germanic languages\n* target group: English\n* OPUS readme: gmw-eng\n* model: transformer\n* source language(s): afr ang\\_Latn deu enm\\_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.2, chr-F: 0.538\ntestset: URL, BLEU: 25.7, chr-F: 0.534\ntestset: URL, BLEU: 25.1, chr-F: 0.530\ntestset: URL, BLEU: 27.9, chr-F: 0.565\ntestset: URL, BLEU: 25.3, chr-F: 0.539\ntestset: URL, BLEU: 26.6, chr-F: 0.548\ntestset: URL, BLEU: 29.6, chr-F: 0.565\ntestset: URL, BLEU: 30.2, chr-F: 0.571\ntestset: URL, BLEU: 31.5, chr-F: 0.577\ntestset: URL, BLEU: 36.7, chr-F: 0.622\ntestset: URL, BLEU: 32.3, chr-F: 0.585\ntestset: URL, BLEU: 39.9, chr-F: 0.638\ntestset: URL, BLEU: 35.9, chr-F: 0.611\ntestset: URL, BLEU: 61.8, chr-F: 0.750\ntestset: URL, BLEU: 7.3, chr-F: 0.220\ntestset: URL, BLEU: 48.3, chr-F: 0.657\ntestset: URL, BLEU: 16.1, chr-F: 0.423\ntestset: URL, BLEU: 7.0, chr-F: 0.168\ntestset: URL, BLEU: 28.6, chr-F: 0.488\ntestset: URL, BLEU: 15.5, chr-F: 0.326\ntestset: URL, BLEU: 12.7, chr-F: 0.308\ntestset: URL, BLEU: 8.4, chr-F: 0.254\ntestset: URL, BLEU: 28.7, chr-F: 0.453\ntestset: URL, BLEU: 48.5, chr-F: 0.646\ntestset: URL, BLEU: 31.4, chr-F: 0.509\ntestset: URL, BLEU: 58.1, chr-F: 0.728\ntestset: URL, BLEU: 25.1, chr-F: 0.406\ntestset: URL, BLEU: 40.8, chr-F: 0.570\ntestset: URL, BLEU: 20.3, chr-F: 0.380\ntestset: URL, BLEU: 20.5, chr-F: 0.315\ntestset: URL, BLEU: 16.0, chr-F: 0.366",
"### System Info:\n\n\n* hf\\_name: gmw-eng\n* source\\_languages: gmw\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']\n* src\\_constituents: {'ksh', 'nld', 'eng', 'enm\\_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang\\_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: gmw\n* tgt\\_alpha3: eng\n* short\\_pair: gmw-en\n* chrF2\\_score: 0.6459999999999999\n* bleu: 48.5\n* brevity\\_penalty: 0.997\n* ref\\_len: 72584.0\n* src\\_name: West Germanic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: gmw\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: gmw-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #en #lb #af #de #fy #yi #gmw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### gmw-eng\n\n\n* source group: West Germanic languages\n* target group: English\n* OPUS readme: gmw-eng\n* model: transformer\n* source language(s): afr ang\\_Latn deu enm\\_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.2, chr-F: 0.538\ntestset: URL, BLEU: 25.7, chr-F: 0.534\ntestset: URL, BLEU: 25.1, chr-F: 0.530\ntestset: URL, BLEU: 27.9, chr-F: 0.565\ntestset: URL, BLEU: 25.3, chr-F: 0.539\ntestset: URL, BLEU: 26.6, chr-F: 0.548\ntestset: URL, BLEU: 29.6, chr-F: 0.565\ntestset: URL, BLEU: 30.2, chr-F: 0.571\ntestset: URL, BLEU: 31.5, chr-F: 0.577\ntestset: URL, BLEU: 36.7, chr-F: 0.622\ntestset: URL, BLEU: 32.3, chr-F: 0.585\ntestset: URL, BLEU: 39.9, chr-F: 0.638\ntestset: URL, BLEU: 35.9, chr-F: 0.611\ntestset: URL, BLEU: 61.8, chr-F: 0.750\ntestset: URL, BLEU: 7.3, chr-F: 0.220\ntestset: URL, BLEU: 48.3, chr-F: 0.657\ntestset: URL, BLEU: 16.1, chr-F: 0.423\ntestset: URL, BLEU: 7.0, chr-F: 0.168\ntestset: URL, BLEU: 28.6, chr-F: 0.488\ntestset: URL, BLEU: 15.5, chr-F: 0.326\ntestset: URL, BLEU: 12.7, chr-F: 0.308\ntestset: URL, BLEU: 8.4, chr-F: 0.254\ntestset: URL, BLEU: 28.7, chr-F: 0.453\ntestset: URL, BLEU: 48.5, chr-F: 0.646\ntestset: URL, BLEU: 31.4, chr-F: 0.509\ntestset: URL, BLEU: 58.1, chr-F: 0.728\ntestset: URL, BLEU: 25.1, chr-F: 0.406\ntestset: URL, BLEU: 40.8, chr-F: 0.570\ntestset: URL, BLEU: 20.3, chr-F: 0.380\ntestset: URL, BLEU: 20.5, chr-F: 0.315\ntestset: URL, BLEU: 16.0, chr-F: 0.366",
"### System Info:\n\n\n* hf\\_name: gmw-eng\n* source\\_languages: gmw\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']\n* src\\_constituents: {'ksh', 'nld', 'eng', 'enm\\_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang\\_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: gmw\n* tgt\\_alpha3: eng\n* short\\_pair: gmw-en\n* chrF2\\_score: 0.6459999999999999\n* bleu: 48.5\n* brevity\\_penalty: 0.997\n* ref\\_len: 72584.0\n* src\\_name: West Germanic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: gmw\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: gmw-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
65,
851,
531
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #en #lb #af #de #fy #yi #gmw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### gmw-eng\n\n\n* source group: West Germanic languages\n* target group: English\n* OPUS readme: gmw-eng\n* model: transformer\n* source language(s): afr ang\\_Latn deu enm\\_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.2, chr-F: 0.538\ntestset: URL, BLEU: 25.7, chr-F: 0.534\ntestset: URL, BLEU: 25.1, chr-F: 0.530\ntestset: URL, BLEU: 27.9, chr-F: 0.565\ntestset: URL, BLEU: 25.3, chr-F: 0.539\ntestset: URL, BLEU: 26.6, chr-F: 0.548\ntestset: URL, BLEU: 29.6, chr-F: 0.565\ntestset: URL, BLEU: 30.2, chr-F: 0.571\ntestset: URL, BLEU: 31.5, chr-F: 0.577\ntestset: URL, BLEU: 36.7, chr-F: 0.622\ntestset: URL, BLEU: 32.3, chr-F: 0.585\ntestset: URL, BLEU: 39.9, chr-F: 0.638\ntestset: URL, BLEU: 35.9, chr-F: 0.611\ntestset: URL, BLEU: 61.8, chr-F: 0.750\ntestset: URL, BLEU: 7.3, chr-F: 0.220\ntestset: URL, BLEU: 48.3, chr-F: 0.657\ntestset: URL, BLEU: 16.1, chr-F: 0.423\ntestset: URL, BLEU: 7.0, chr-F: 0.168\ntestset: URL, BLEU: 28.6, chr-F: 0.488\ntestset: URL, BLEU: 15.5, chr-F: 0.326\ntestset: URL, BLEU: 12.7, chr-F: 0.308\ntestset: URL, BLEU: 8.4, chr-F: 0.254\ntestset: URL, BLEU: 28.7, chr-F: 0.453\ntestset: URL, BLEU: 48.5, chr-F: 0.646\ntestset: URL, BLEU: 31.4, chr-F: 0.509\ntestset: URL, BLEU: 58.1, chr-F: 0.728\ntestset: URL, BLEU: 25.1, chr-F: 0.406\ntestset: URL, BLEU: 40.8, chr-F: 0.570\ntestset: URL, BLEU: 20.3, chr-F: 0.380\ntestset: URL, BLEU: 20.5, chr-F: 0.315\ntestset: URL, BLEU: 16.0, chr-F: 0.366### System Info:\n\n\n* hf\\_name: gmw-eng\n* source\\_languages: gmw\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']\n* src\\_constituents: {'ksh', 'nld', 'eng', 'enm\\_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang\\_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: gmw\n* tgt\\_alpha3: eng\n* short\\_pair: gmw-en\n* chrF2\\_score: 0.6459999999999999\n* bleu: 48.5\n* brevity\\_penalty: 0.997\n* ref\\_len: 72584.0\n* src\\_name: West Germanic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: gmw\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: gmw-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### gmw-gmw
* source group: West Germanic languages
* target group: West Germanic languages
* OPUS readme: [gmw-gmw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-gmw/README.md)
* model: transformer
* source language(s): afr ang_Latn deu eng enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* target language(s): afr ang_Latn deu eng enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.eval.txt)
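Example usage — a minimal sketch for this multilingual pair. The checkpoint id `Helsinki-NLP/opus-mt-gmw-gmw` is the one recorded for this card; the assumption that the token id is the three-letter code of the target language (as listed under `tgt_constituents` below) is not confirmed by the card itself.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-gmw-gmw"
tokenizer = MarianTokenizer.from_pretrained(model_name)  # applies the spm32k preprocessing
model = MarianMTModel.from_pretrained(model_name)

# Prepend the sentence-initial target-language token, here ">>nld<<" for Dutch
# (the three-letter code is an assumption; use any valid target-language ID).
batch = tokenizer([">>nld<< Ich übersetze gerne Sätze."], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))  # German -> Dutch
```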
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-deueng.deu.eng | 25.3 | 0.527 |
| newssyscomb2009-engdeu.eng.deu | 19.0 | 0.502 |
| news-test2008-deueng.deu.eng | 23.7 | 0.515 |
| news-test2008-engdeu.eng.deu | 19.2 | 0.491 |
| newstest2009-deueng.deu.eng | 23.1 | 0.514 |
| newstest2009-engdeu.eng.deu | 18.6 | 0.495 |
| newstest2010-deueng.deu.eng | 25.8 | 0.545 |
| newstest2010-engdeu.eng.deu | 20.3 | 0.505 |
| newstest2011-deueng.deu.eng | 23.7 | 0.523 |
| newstest2011-engdeu.eng.deu | 18.9 | 0.490 |
| newstest2012-deueng.deu.eng | 24.4 | 0.529 |
| newstest2012-engdeu.eng.deu | 19.2 | 0.489 |
| newstest2013-deueng.deu.eng | 27.2 | 0.545 |
| newstest2013-engdeu.eng.deu | 22.4 | 0.514 |
| newstest2014-deen-deueng.deu.eng | 27.0 | 0.546 |
| newstest2015-ende-deueng.deu.eng | 28.4 | 0.552 |
| newstest2015-ende-engdeu.eng.deu | 25.3 | 0.541 |
| newstest2016-ende-deueng.deu.eng | 33.2 | 0.595 |
| newstest2016-ende-engdeu.eng.deu | 29.8 | 0.578 |
| newstest2017-ende-deueng.deu.eng | 29.0 | 0.557 |
| newstest2017-ende-engdeu.eng.deu | 23.9 | 0.534 |
| newstest2018-ende-deueng.deu.eng | 35.9 | 0.607 |
| newstest2018-ende-engdeu.eng.deu | 34.8 | 0.609 |
| newstest2019-deen-deueng.deu.eng | 32.1 | 0.579 |
| newstest2019-ende-engdeu.eng.deu | 31.0 | 0.579 |
| Tatoeba-test.afr-ang.afr.ang | 0.0 | 0.065 |
| Tatoeba-test.afr-deu.afr.deu | 46.8 | 0.668 |
| Tatoeba-test.afr-eng.afr.eng | 58.5 | 0.728 |
| Tatoeba-test.afr-enm.afr.enm | 13.4 | 0.357 |
| Tatoeba-test.afr-fry.afr.fry | 5.3 | 0.026 |
| Tatoeba-test.afr-gos.afr.gos | 3.5 | 0.228 |
| Tatoeba-test.afr-ltz.afr.ltz | 1.6 | 0.131 |
| Tatoeba-test.afr-nld.afr.nld | 55.4 | 0.715 |
| Tatoeba-test.afr-yid.afr.yid | 3.4 | 0.008 |
| Tatoeba-test.ang-afr.ang.afr | 3.1 | 0.096 |
| Tatoeba-test.ang-deu.ang.deu | 2.6 | 0.188 |
| Tatoeba-test.ang-eng.ang.eng | 5.4 | 0.211 |
| Tatoeba-test.ang-enm.ang.enm | 1.7 | 0.197 |
| Tatoeba-test.ang-gos.ang.gos | 6.6 | 0.186 |
| Tatoeba-test.ang-ltz.ang.ltz | 5.3 | 0.072 |
| Tatoeba-test.ang-yid.ang.yid | 0.9 | 0.131 |
| Tatoeba-test.deu-afr.deu.afr | 52.7 | 0.699 |
| Tatoeba-test.deu-ang.deu.ang | 0.8 | 0.133 |
| Tatoeba-test.deu-eng.deu.eng | 43.5 | 0.621 |
| Tatoeba-test.deu-enm.deu.enm | 6.9 | 0.245 |
| Tatoeba-test.deu-frr.deu.frr | 0.8 | 0.200 |
| Tatoeba-test.deu-fry.deu.fry | 15.1 | 0.367 |
| Tatoeba-test.deu-gos.deu.gos | 2.2 | 0.279 |
| Tatoeba-test.deu-gsw.deu.gsw | 1.0 | 0.176 |
| Tatoeba-test.deu-ksh.deu.ksh | 0.6 | 0.208 |
| Tatoeba-test.deu-ltz.deu.ltz | 12.1 | 0.274 |
| Tatoeba-test.deu-nds.deu.nds | 18.8 | 0.446 |
| Tatoeba-test.deu-nld.deu.nld | 48.6 | 0.669 |
| Tatoeba-test.deu-pdc.deu.pdc | 4.6 | 0.198 |
| Tatoeba-test.deu-sco.deu.sco | 12.0 | 0.340 |
| Tatoeba-test.deu-stq.deu.stq | 3.2 | 0.240 |
| Tatoeba-test.deu-swg.deu.swg | 0.5 | 0.179 |
| Tatoeba-test.deu-yid.deu.yid | 1.7 | 0.160 |
| Tatoeba-test.eng-afr.eng.afr | 55.8 | 0.730 |
| Tatoeba-test.eng-ang.eng.ang | 5.7 | 0.157 |
| Tatoeba-test.eng-deu.eng.deu | 36.7 | 0.584 |
| Tatoeba-test.eng-enm.eng.enm | 2.0 | 0.272 |
| Tatoeba-test.eng-frr.eng.frr | 6.1 | 0.246 |
| Tatoeba-test.eng-fry.eng.fry | 15.3 | 0.378 |
| Tatoeba-test.eng-gos.eng.gos | 1.2 | 0.242 |
| Tatoeba-test.eng-gsw.eng.gsw | 0.9 | 0.164 |
| Tatoeba-test.eng-ksh.eng.ksh | 0.9 | 0.170 |
| Tatoeba-test.eng-ltz.eng.ltz | 13.7 | 0.263 |
| Tatoeba-test.eng-nds.eng.nds | 17.1 | 0.410 |
| Tatoeba-test.eng-nld.eng.nld | 49.6 | 0.673 |
| Tatoeba-test.eng-pdc.eng.pdc | 5.1 | 0.218 |
| Tatoeba-test.eng-sco.eng.sco | 34.8 | 0.587 |
| Tatoeba-test.eng-stq.eng.stq | 2.1 | 0.322 |
| Tatoeba-test.eng-swg.eng.swg | 1.7 | 0.192 |
| Tatoeba-test.eng-yid.eng.yid | 1.7 | 0.173 |
| Tatoeba-test.enm-afr.enm.afr | 13.4 | 0.397 |
| Tatoeba-test.enm-ang.enm.ang | 0.7 | 0.063 |
| Tatoeba-test.enm-deu.enm.deu | 41.5 | 0.514 |
| Tatoeba-test.enm-eng.enm.eng | 21.3 | 0.483 |
| Tatoeba-test.enm-fry.enm.fry | 0.0 | 0.058 |
| Tatoeba-test.enm-gos.enm.gos | 10.7 | 0.354 |
| Tatoeba-test.enm-ksh.enm.ksh | 7.0 | 0.161 |
| Tatoeba-test.enm-nds.enm.nds | 18.6 | 0.316 |
| Tatoeba-test.enm-nld.enm.nld | 38.3 | 0.524 |
| Tatoeba-test.enm-yid.enm.yid | 0.7 | 0.128 |
| Tatoeba-test.frr-deu.frr.deu | 4.1 | 0.219 |
| Tatoeba-test.frr-eng.frr.eng | 14.1 | 0.186 |
| Tatoeba-test.frr-fry.frr.fry | 3.1 | 0.129 |
| Tatoeba-test.frr-gos.frr.gos | 3.6 | 0.226 |
| Tatoeba-test.frr-nds.frr.nds | 12.4 | 0.145 |
| Tatoeba-test.frr-nld.frr.nld | 9.8 | 0.209 |
| Tatoeba-test.frr-stq.frr.stq | 2.8 | 0.142 |
| Tatoeba-test.fry-afr.fry.afr | 0.0 | 1.000 |
| Tatoeba-test.fry-deu.fry.deu | 30.1 | 0.535 |
| Tatoeba-test.fry-eng.fry.eng | 28.0 | 0.486 |
| Tatoeba-test.fry-enm.fry.enm | 16.0 | 0.262 |
| Tatoeba-test.fry-frr.fry.frr | 5.5 | 0.160 |
| Tatoeba-test.fry-gos.fry.gos | 1.6 | 0.307 |
| Tatoeba-test.fry-ltz.fry.ltz | 30.4 | 0.438 |
| Tatoeba-test.fry-nds.fry.nds | 8.1 | 0.083 |
| Tatoeba-test.fry-nld.fry.nld | 41.4 | 0.616 |
| Tatoeba-test.fry-stq.fry.stq | 1.6 | 0.217 |
| Tatoeba-test.fry-yid.fry.yid | 1.6 | 0.159 |
| Tatoeba-test.gos-afr.gos.afr | 6.3 | 0.318 |
| Tatoeba-test.gos-ang.gos.ang | 6.2 | 0.058 |
| Tatoeba-test.gos-deu.gos.deu | 11.7 | 0.363 |
| Tatoeba-test.gos-eng.gos.eng | 14.9 | 0.322 |
| Tatoeba-test.gos-enm.gos.enm | 9.1 | 0.398 |
| Tatoeba-test.gos-frr.gos.frr | 3.3 | 0.117 |
| Tatoeba-test.gos-fry.gos.fry | 13.1 | 0.387 |
| Tatoeba-test.gos-ltz.gos.ltz | 3.1 | 0.154 |
| Tatoeba-test.gos-nds.gos.nds | 2.4 | 0.206 |
| Tatoeba-test.gos-nld.gos.nld | 13.9 | 0.395 |
| Tatoeba-test.gos-stq.gos.stq | 2.1 | 0.209 |
| Tatoeba-test.gos-yid.gos.yid | 1.7 | 0.147 |
| Tatoeba-test.gsw-deu.gsw.deu | 10.5 | 0.350 |
| Tatoeba-test.gsw-eng.gsw.eng | 10.7 | 0.299 |
| Tatoeba-test.ksh-deu.ksh.deu | 12.0 | 0.373 |
| Tatoeba-test.ksh-eng.ksh.eng | 3.2 | 0.225 |
| Tatoeba-test.ksh-enm.ksh.enm | 13.4 | 0.308 |
| Tatoeba-test.ltz-afr.ltz.afr | 37.4 | 0.525 |
| Tatoeba-test.ltz-ang.ltz.ang | 2.8 | 0.036 |
| Tatoeba-test.ltz-deu.ltz.deu | 40.3 | 0.596 |
| Tatoeba-test.ltz-eng.ltz.eng | 31.7 | 0.490 |
| Tatoeba-test.ltz-fry.ltz.fry | 36.3 | 0.658 |
| Tatoeba-test.ltz-gos.ltz.gos | 2.9 | 0.209 |
| Tatoeba-test.ltz-nld.ltz.nld | 38.8 | 0.530 |
| Tatoeba-test.ltz-stq.ltz.stq | 5.8 | 0.165 |
| Tatoeba-test.ltz-yid.ltz.yid | 1.0 | 0.159 |
| Tatoeba-test.multi.multi | 36.4 | 0.568 |
| Tatoeba-test.nds-deu.nds.deu | 35.0 | 0.573 |
| Tatoeba-test.nds-eng.nds.eng | 29.6 | 0.495 |
| Tatoeba-test.nds-enm.nds.enm | 3.7 | 0.194 |
| Tatoeba-test.nds-frr.nds.frr | 6.6 | 0.133 |
| Tatoeba-test.nds-fry.nds.fry | 4.2 | 0.087 |
| Tatoeba-test.nds-gos.nds.gos | 2.0 | 0.243 |
| Tatoeba-test.nds-nld.nds.nld | 41.4 | 0.618 |
| Tatoeba-test.nds-swg.nds.swg | 0.6 | 0.178 |
| Tatoeba-test.nds-yid.nds.yid | 8.3 | 0.238 |
| Tatoeba-test.nld-afr.nld.afr | 59.4 | 0.759 |
| Tatoeba-test.nld-deu.nld.deu | 49.9 | 0.685 |
| Tatoeba-test.nld-eng.nld.eng | 54.1 | 0.699 |
| Tatoeba-test.nld-enm.nld.enm | 5.0 | 0.250 |
| Tatoeba-test.nld-frr.nld.frr | 2.4 | 0.224 |
| Tatoeba-test.nld-fry.nld.fry | 19.4 | 0.446 |
| Tatoeba-test.nld-gos.nld.gos | 2.5 | 0.273 |
| Tatoeba-test.nld-ltz.nld.ltz | 13.8 | 0.292 |
| Tatoeba-test.nld-nds.nld.nds | 21.3 | 0.457 |
| Tatoeba-test.nld-sco.nld.sco | 14.7 | 0.423 |
| Tatoeba-test.nld-stq.nld.stq | 1.9 | 0.257 |
| Tatoeba-test.nld-swg.nld.swg | 4.2 | 0.162 |
| Tatoeba-test.nld-yid.nld.yid | 2.6 | 0.186 |
| Tatoeba-test.pdc-deu.pdc.deu | 39.7 | 0.529 |
| Tatoeba-test.pdc-eng.pdc.eng | 25.0 | 0.427 |
| Tatoeba-test.sco-deu.sco.deu | 28.4 | 0.428 |
| Tatoeba-test.sco-eng.sco.eng | 41.8 | 0.595 |
| Tatoeba-test.sco-nld.sco.nld | 36.4 | 0.565 |
| Tatoeba-test.stq-deu.stq.deu | 7.7 | 0.328 |
| Tatoeba-test.stq-eng.stq.eng | 21.1 | 0.428 |
| Tatoeba-test.stq-frr.stq.frr | 2.0 | 0.118 |
| Tatoeba-test.stq-fry.stq.fry | 6.3 | 0.255 |
| Tatoeba-test.stq-gos.stq.gos | 1.4 | 0.244 |
| Tatoeba-test.stq-ltz.stq.ltz | 4.4 | 0.204 |
| Tatoeba-test.stq-nld.stq.nld | 10.7 | 0.371 |
| Tatoeba-test.stq-yid.stq.yid | 1.4 | 0.105 |
| Tatoeba-test.swg-deu.swg.deu | 9.5 | 0.343 |
| Tatoeba-test.swg-eng.swg.eng | 15.1 | 0.306 |
| Tatoeba-test.swg-nds.swg.nds | 0.7 | 0.196 |
| Tatoeba-test.swg-nld.swg.nld | 11.6 | 0.308 |
| Tatoeba-test.swg-yid.swg.yid | 0.9 | 0.186 |
| Tatoeba-test.yid-afr.yid.afr | 100.0 | 1.000 |
| Tatoeba-test.yid-ang.yid.ang | 0.6 | 0.079 |
| Tatoeba-test.yid-deu.yid.deu | 16.7 | 0.372 |
| Tatoeba-test.yid-eng.yid.eng | 15.8 | 0.344 |
| Tatoeba-test.yid-enm.yid.enm | 1.3 | 0.166 |
| Tatoeba-test.yid-fry.yid.fry | 5.6 | 0.157 |
| Tatoeba-test.yid-gos.yid.gos | 2.2 | 0.160 |
| Tatoeba-test.yid-ltz.yid.ltz | 2.1 | 0.238 |
| Tatoeba-test.yid-nds.yid.nds | 14.4 | 0.365 |
| Tatoeba-test.yid-nld.yid.nld | 20.9 | 0.397 |
| Tatoeba-test.yid-stq.yid.stq | 3.7 | 0.165 |
| Tatoeba-test.yid-swg.yid.swg | 1.8 | 0.156 |
### System Info:
- hf_name: gmw-gmw
- source_languages: gmw
- target_languages: gmw
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/gmw-gmw/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']
- src_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
- tgt_constituents: {'ksh', 'nld', 'eng', 'enm_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-gmw/opus-2020-07-27.test.txt
- src_alpha3: gmw
- tgt_alpha3: gmw
- short_pair: gmw-gmw
- chrF2_score: 0.568
- bleu: 36.4
- brevity_penalty: 1.0
- ref_len: 72534.0
- src_name: West Germanic languages
- tgt_name: West Germanic languages
- train_date: 2020-07-27
- src_alpha2: gmw
- tgt_alpha2: gmw
- prefer_old: False
- long_pair: gmw-gmw
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["nl", "en", "lb", "af", "de", "fy", "yi", "gmw"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-gmw-gmw | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"nl",
"en",
"lb",
"af",
"de",
"fy",
"yi",
"gmw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"nl",
"en",
"lb",
"af",
"de",
"fy",
"yi",
"gmw"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #nl #en #lb #af #de #fy #yi #gmw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### gmw-gmw
* source group: West Germanic languages
* target group: West Germanic languages
* OPUS readme: gmw-gmw
* model: transformer
* source language(s): afr ang\_Latn deu eng enm\_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* target language(s): afr ang\_Latn deu eng enm\_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.3, chr-F: 0.527
testset: URL, BLEU: 19.0, chr-F: 0.502
testset: URL, BLEU: 23.7, chr-F: 0.515
testset: URL, BLEU: 19.2, chr-F: 0.491
testset: URL, BLEU: 23.1, chr-F: 0.514
testset: URL, BLEU: 18.6, chr-F: 0.495
testset: URL, BLEU: 25.8, chr-F: 0.545
testset: URL, BLEU: 20.3, chr-F: 0.505
testset: URL, BLEU: 23.7, chr-F: 0.523
testset: URL, BLEU: 18.9, chr-F: 0.490
testset: URL, BLEU: 24.4, chr-F: 0.529
testset: URL, BLEU: 19.2, chr-F: 0.489
testset: URL, BLEU: 27.2, chr-F: 0.545
testset: URL, BLEU: 22.4, chr-F: 0.514
testset: URL, BLEU: 27.0, chr-F: 0.546
testset: URL, BLEU: 28.4, chr-F: 0.552
testset: URL, BLEU: 25.3, chr-F: 0.541
testset: URL, BLEU: 33.2, chr-F: 0.595
testset: URL, BLEU: 29.8, chr-F: 0.578
testset: URL, BLEU: 29.0, chr-F: 0.557
testset: URL, BLEU: 23.9, chr-F: 0.534
testset: URL, BLEU: 35.9, chr-F: 0.607
testset: URL, BLEU: 34.8, chr-F: 0.609
testset: URL, BLEU: 32.1, chr-F: 0.579
testset: URL, BLEU: 31.0, chr-F: 0.579
testset: URL, BLEU: 0.0, chr-F: 0.065
testset: URL, BLEU: 46.8, chr-F: 0.668
testset: URL, BLEU: 58.5, chr-F: 0.728
testset: URL, BLEU: 13.4, chr-F: 0.357
testset: URL, BLEU: 5.3, chr-F: 0.026
testset: URL, BLEU: 3.5, chr-F: 0.228
testset: URL, BLEU: 1.6, chr-F: 0.131
testset: URL, BLEU: 55.4, chr-F: 0.715
testset: URL, BLEU: 3.4, chr-F: 0.008
testset: URL, BLEU: 3.1, chr-F: 0.096
testset: URL, BLEU: 2.6, chr-F: 0.188
testset: URL, BLEU: 5.4, chr-F: 0.211
testset: URL, BLEU: 1.7, chr-F: 0.197
testset: URL, BLEU: 6.6, chr-F: 0.186
testset: URL, BLEU: 5.3, chr-F: 0.072
testset: URL, BLEU: 0.9, chr-F: 0.131
testset: URL, BLEU: 52.7, chr-F: 0.699
testset: URL, BLEU: 0.8, chr-F: 0.133
testset: URL, BLEU: 43.5, chr-F: 0.621
testset: URL, BLEU: 6.9, chr-F: 0.245
testset: URL, BLEU: 0.8, chr-F: 0.200
testset: URL, BLEU: 15.1, chr-F: 0.367
testset: URL, BLEU: 2.2, chr-F: 0.279
testset: URL, BLEU: 1.0, chr-F: 0.176
testset: URL, BLEU: 0.6, chr-F: 0.208
testset: URL, BLEU: 12.1, chr-F: 0.274
testset: URL, BLEU: 18.8, chr-F: 0.446
testset: URL, BLEU: 48.6, chr-F: 0.669
testset: URL, BLEU: 4.6, chr-F: 0.198
testset: URL, BLEU: 12.0, chr-F: 0.340
testset: URL, BLEU: 3.2, chr-F: 0.240
testset: URL, BLEU: 0.5, chr-F: 0.179
testset: URL, BLEU: 1.7, chr-F: 0.160
testset: URL, BLEU: 55.8, chr-F: 0.730
testset: URL, BLEU: 5.7, chr-F: 0.157
testset: URL, BLEU: 36.7, chr-F: 0.584
testset: URL, BLEU: 2.0, chr-F: 0.272
testset: URL, BLEU: 6.1, chr-F: 0.246
testset: URL, BLEU: 15.3, chr-F: 0.378
testset: URL, BLEU: 1.2, chr-F: 0.242
testset: URL, BLEU: 0.9, chr-F: 0.164
testset: URL, BLEU: 0.9, chr-F: 0.170
testset: URL, BLEU: 13.7, chr-F: 0.263
testset: URL, BLEU: 17.1, chr-F: 0.410
testset: URL, BLEU: 49.6, chr-F: 0.673
testset: URL, BLEU: 5.1, chr-F: 0.218
testset: URL, BLEU: 34.8, chr-F: 0.587
testset: URL, BLEU: 2.1, chr-F: 0.322
testset: URL, BLEU: 1.7, chr-F: 0.192
testset: URL, BLEU: 1.7, chr-F: 0.173
testset: URL, BLEU: 13.4, chr-F: 0.397
testset: URL, BLEU: 0.7, chr-F: 0.063
testset: URL, BLEU: 41.5, chr-F: 0.514
testset: URL, BLEU: 21.3, chr-F: 0.483
testset: URL, BLEU: 0.0, chr-F: 0.058
testset: URL, BLEU: 10.7, chr-F: 0.354
testset: URL, BLEU: 7.0, chr-F: 0.161
testset: URL, BLEU: 18.6, chr-F: 0.316
testset: URL, BLEU: 38.3, chr-F: 0.524
testset: URL, BLEU: 0.7, chr-F: 0.128
testset: URL, BLEU: 4.1, chr-F: 0.219
testset: URL, BLEU: 14.1, chr-F: 0.186
testset: URL, BLEU: 3.1, chr-F: 0.129
testset: URL, BLEU: 3.6, chr-F: 0.226
testset: URL, BLEU: 12.4, chr-F: 0.145
testset: URL, BLEU: 9.8, chr-F: 0.209
testset: URL, BLEU: 2.8, chr-F: 0.142
testset: URL, BLEU: 0.0, chr-F: 1.000
testset: URL, BLEU: 30.1, chr-F: 0.535
testset: URL, BLEU: 28.0, chr-F: 0.486
testset: URL, BLEU: 16.0, chr-F: 0.262
testset: URL, BLEU: 5.5, chr-F: 0.160
testset: URL, BLEU: 1.6, chr-F: 0.307
testset: URL, BLEU: 30.4, chr-F: 0.438
testset: URL, BLEU: 8.1, chr-F: 0.083
testset: URL, BLEU: 41.4, chr-F: 0.616
testset: URL, BLEU: 1.6, chr-F: 0.217
testset: URL, BLEU: 1.6, chr-F: 0.159
testset: URL, BLEU: 6.3, chr-F: 0.318
testset: URL, BLEU: 6.2, chr-F: 0.058
testset: URL, BLEU: 11.7, chr-F: 0.363
testset: URL, BLEU: 14.9, chr-F: 0.322
testset: URL, BLEU: 9.1, chr-F: 0.398
testset: URL, BLEU: 3.3, chr-F: 0.117
testset: URL, BLEU: 13.1, chr-F: 0.387
testset: URL, BLEU: 3.1, chr-F: 0.154
testset: URL, BLEU: 2.4, chr-F: 0.206
testset: URL, BLEU: 13.9, chr-F: 0.395
testset: URL, BLEU: 2.1, chr-F: 0.209
testset: URL, BLEU: 1.7, chr-F: 0.147
testset: URL, BLEU: 10.5, chr-F: 0.350
testset: URL, BLEU: 10.7, chr-F: 0.299
testset: URL, BLEU: 12.0, chr-F: 0.373
testset: URL, BLEU: 3.2, chr-F: 0.225
testset: URL, BLEU: 13.4, chr-F: 0.308
testset: URL, BLEU: 37.4, chr-F: 0.525
testset: URL, BLEU: 2.8, chr-F: 0.036
testset: URL, BLEU: 40.3, chr-F: 0.596
testset: URL, BLEU: 31.7, chr-F: 0.490
testset: URL, BLEU: 36.3, chr-F: 0.658
testset: URL, BLEU: 2.9, chr-F: 0.209
testset: URL, BLEU: 38.8, chr-F: 0.530
testset: URL, BLEU: 5.8, chr-F: 0.165
testset: URL, BLEU: 1.0, chr-F: 0.159
testset: URL, BLEU: 36.4, chr-F: 0.568
testset: URL, BLEU: 35.0, chr-F: 0.573
testset: URL, BLEU: 29.6, chr-F: 0.495
testset: URL, BLEU: 3.7, chr-F: 0.194
testset: URL, BLEU: 6.6, chr-F: 0.133
testset: URL, BLEU: 4.2, chr-F: 0.087
testset: URL, BLEU: 2.0, chr-F: 0.243
testset: URL, BLEU: 41.4, chr-F: 0.618
testset: URL, BLEU: 0.6, chr-F: 0.178
testset: URL, BLEU: 8.3, chr-F: 0.238
testset: URL, BLEU: 59.4, chr-F: 0.759
testset: URL, BLEU: 49.9, chr-F: 0.685
testset: URL, BLEU: 54.1, chr-F: 0.699
testset: URL, BLEU: 5.0, chr-F: 0.250
testset: URL, BLEU: 2.4, chr-F: 0.224
testset: URL, BLEU: 19.4, chr-F: 0.446
testset: URL, BLEU: 2.5, chr-F: 0.273
testset: URL, BLEU: 13.8, chr-F: 0.292
testset: URL, BLEU: 21.3, chr-F: 0.457
testset: URL, BLEU: 14.7, chr-F: 0.423
testset: URL, BLEU: 1.9, chr-F: 0.257
testset: URL, BLEU: 4.2, chr-F: 0.162
testset: URL, BLEU: 2.6, chr-F: 0.186
testset: URL, BLEU: 39.7, chr-F: 0.529
testset: URL, BLEU: 25.0, chr-F: 0.427
testset: URL, BLEU: 28.4, chr-F: 0.428
testset: URL, BLEU: 41.8, chr-F: 0.595
testset: URL, BLEU: 36.4, chr-F: 0.565
testset: URL, BLEU: 7.7, chr-F: 0.328
testset: URL, BLEU: 21.1, chr-F: 0.428
testset: URL, BLEU: 2.0, chr-F: 0.118
testset: URL, BLEU: 6.3, chr-F: 0.255
testset: URL, BLEU: 1.4, chr-F: 0.244
testset: URL, BLEU: 4.4, chr-F: 0.204
testset: URL, BLEU: 10.7, chr-F: 0.371
testset: URL, BLEU: 1.4, chr-F: 0.105
testset: URL, BLEU: 9.5, chr-F: 0.343
testset: URL, BLEU: 15.1, chr-F: 0.306
testset: URL, BLEU: 0.7, chr-F: 0.196
testset: URL, BLEU: 11.6, chr-F: 0.308
testset: URL, BLEU: 0.9, chr-F: 0.186
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 0.6, chr-F: 0.079
testset: URL, BLEU: 16.7, chr-F: 0.372
testset: URL, BLEU: 15.8, chr-F: 0.344
testset: URL, BLEU: 1.3, chr-F: 0.166
testset: URL, BLEU: 5.6, chr-F: 0.157
testset: URL, BLEU: 2.2, chr-F: 0.160
testset: URL, BLEU: 2.1, chr-F: 0.238
testset: URL, BLEU: 14.4, chr-F: 0.365
testset: URL, BLEU: 20.9, chr-F: 0.397
testset: URL, BLEU: 3.7, chr-F: 0.165
testset: URL, BLEU: 1.8, chr-F: 0.156
### System Info:
* hf\_name: gmw-gmw
* source\_languages: gmw
* target\_languages: gmw
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']
* src\_constituents: {'ksh', 'nld', 'eng', 'enm\_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang\_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
* tgt\_constituents: {'ksh', 'nld', 'eng', 'enm\_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang\_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}
* src\_multilingual: True
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: gmw
* tgt\_alpha3: gmw
* short\_pair: gmw-gmw
* chrF2\_score: 0.568
* bleu: 36.4
* brevity\_penalty: 1.0
* ref\_len: 72534.0
* src\_name: West Germanic languages
* tgt\_name: West Germanic languages
* train\_date: 2020-07-27
* src\_alpha2: gmw
* tgt\_alpha2: gmw
* prefer\_old: False
* long\_pair: gmw-gmw
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### gmw-gmw\n\n\n* source group: West Germanic languages\n* target group: West Germanic languages\n* OPUS readme: gmw-gmw\n* model: transformer\n* source language(s): afr ang\\_Latn deu eng enm\\_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid\n* target language(s): afr ang\\_Latn deu eng enm\\_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.527\ntestset: URL, BLEU: 19.0, chr-F: 0.502\ntestset: URL, BLEU: 23.7, chr-F: 0.515\ntestset: URL, BLEU: 19.2, chr-F: 0.491\ntestset: URL, BLEU: 23.1, chr-F: 0.514\ntestset: URL, BLEU: 18.6, chr-F: 0.495\ntestset: URL, BLEU: 25.8, chr-F: 0.545\ntestset: URL, BLEU: 20.3, chr-F: 0.505\ntestset: URL, BLEU: 23.7, chr-F: 0.523\ntestset: URL, BLEU: 18.9, chr-F: 0.490\ntestset: URL, BLEU: 24.4, chr-F: 0.529\ntestset: URL, BLEU: 19.2, chr-F: 0.489\ntestset: URL, BLEU: 27.2, chr-F: 0.545\ntestset: URL, BLEU: 22.4, chr-F: 0.514\ntestset: URL, BLEU: 27.0, chr-F: 0.546\ntestset: URL, BLEU: 28.4, chr-F: 0.552\ntestset: URL, BLEU: 25.3, chr-F: 0.541\ntestset: URL, BLEU: 33.2, chr-F: 0.595\ntestset: URL, BLEU: 29.8, chr-F: 0.578\ntestset: URL, BLEU: 29.0, chr-F: 0.557\ntestset: URL, BLEU: 23.9, chr-F: 0.534\ntestset: URL, BLEU: 35.9, chr-F: 0.607\ntestset: URL, BLEU: 34.8, chr-F: 0.609\ntestset: URL, BLEU: 32.1, chr-F: 0.579\ntestset: URL, BLEU: 31.0, chr-F: 0.579\ntestset: URL, BLEU: 0.0, chr-F: 0.065\ntestset: URL, BLEU: 46.8, chr-F: 0.668\ntestset: URL, BLEU: 58.5, chr-F: 0.728\ntestset: URL, BLEU: 13.4, chr-F: 0.357\ntestset: URL, BLEU: 5.3, chr-F: 0.026\ntestset: URL, BLEU: 3.5, chr-F: 0.228\ntestset: URL, BLEU: 1.6, chr-F: 0.131\ntestset: URL, BLEU: 55.4, chr-F: 0.715\ntestset: URL, BLEU: 3.4, chr-F: 0.008\ntestset: URL, BLEU: 3.1, chr-F: 0.096\ntestset: URL, BLEU: 2.6, chr-F: 0.188\ntestset: URL, BLEU: 5.4, chr-F: 0.211\ntestset: URL, BLEU: 1.7, chr-F: 0.197\ntestset: URL, BLEU: 6.6, chr-F: 0.186\ntestset: URL, BLEU: 5.3, chr-F: 0.072\ntestset: URL, BLEU: 0.9, chr-F: 0.131\ntestset: URL, BLEU: 52.7, chr-F: 0.699\ntestset: URL, BLEU: 0.8, chr-F: 0.133\ntestset: URL, BLEU: 43.5, chr-F: 0.621\ntestset: URL, BLEU: 6.9, chr-F: 0.245\ntestset: URL, BLEU: 0.8, chr-F: 0.200\ntestset: URL, BLEU: 15.1, chr-F: 0.367\ntestset: URL, BLEU: 2.2, chr-F: 0.279\ntestset: URL, BLEU: 1.0, chr-F: 0.176\ntestset: URL, BLEU: 0.6, chr-F: 0.208\ntestset: URL, BLEU: 12.1, chr-F: 0.274\ntestset: URL, BLEU: 18.8, chr-F: 0.446\ntestset: URL, BLEU: 48.6, chr-F: 0.669\ntestset: URL, BLEU: 4.6, chr-F: 0.198\ntestset: URL, BLEU: 12.0, chr-F: 0.340\ntestset: URL, BLEU: 3.2, chr-F: 0.240\ntestset: URL, BLEU: 0.5, chr-F: 0.179\ntestset: URL, BLEU: 1.7, chr-F: 0.160\ntestset: URL, BLEU: 55.8, chr-F: 0.730\ntestset: URL, BLEU: 5.7, chr-F: 0.157\ntestset: URL, BLEU: 36.7, chr-F: 0.584\ntestset: URL, BLEU: 2.0, chr-F: 0.272\ntestset: URL, BLEU: 6.1, chr-F: 0.246\ntestset: URL, BLEU: 15.3, chr-F: 0.378\ntestset: URL, BLEU: 1.2, chr-F: 0.242\ntestset: URL, BLEU: 0.9, chr-F: 0.164\ntestset: URL, BLEU: 0.9, chr-F: 0.170\ntestset: URL, BLEU: 13.7, chr-F: 0.263\ntestset: URL, BLEU: 17.1, chr-F: 0.410\ntestset: URL, BLEU: 49.6, chr-F: 0.673\ntestset: URL, BLEU: 5.1, chr-F: 0.218\ntestset: URL, BLEU: 34.8, chr-F: 0.587\ntestset: URL, BLEU: 2.1, 
chr-F: 0.322\ntestset: URL, BLEU: 1.7, chr-F: 0.192\ntestset: URL, BLEU: 1.7, chr-F: 0.173\ntestset: URL, BLEU: 13.4, chr-F: 0.397\ntestset: URL, BLEU: 0.7, chr-F: 0.063\ntestset: URL, BLEU: 41.5, chr-F: 0.514\ntestset: URL, BLEU: 21.3, chr-F: 0.483\ntestset: URL, BLEU: 0.0, chr-F: 0.058\ntestset: URL, BLEU: 10.7, chr-F: 0.354\ntestset: URL, BLEU: 7.0, chr-F: 0.161\ntestset: URL, BLEU: 18.6, chr-F: 0.316\ntestset: URL, BLEU: 38.3, chr-F: 0.524\ntestset: URL, BLEU: 0.7, chr-F: 0.128\ntestset: URL, BLEU: 4.1, chr-F: 0.219\ntestset: URL, BLEU: 14.1, chr-F: 0.186\ntestset: URL, BLEU: 3.1, chr-F: 0.129\ntestset: URL, BLEU: 3.6, chr-F: 0.226\ntestset: URL, BLEU: 12.4, chr-F: 0.145\ntestset: URL, BLEU: 9.8, chr-F: 0.209\ntestset: URL, BLEU: 2.8, chr-F: 0.142\ntestset: URL, BLEU: 0.0, chr-F: 1.000\ntestset: URL, BLEU: 30.1, chr-F: 0.535\ntestset: URL, BLEU: 28.0, chr-F: 0.486\ntestset: URL, BLEU: 16.0, chr-F: 0.262\ntestset: URL, BLEU: 5.5, chr-F: 0.160\ntestset: URL, BLEU: 1.6, chr-F: 0.307\ntestset: URL, BLEU: 30.4, chr-F: 0.438\ntestset: URL, BLEU: 8.1, chr-F: 0.083\ntestset: URL, BLEU: 41.4, chr-F: 0.616\ntestset: URL, BLEU: 1.6, chr-F: 0.217\ntestset: URL, BLEU: 1.6, chr-F: 0.159\ntestset: URL, BLEU: 6.3, chr-F: 0.318\ntestset: URL, BLEU: 6.2, chr-F: 0.058\ntestset: URL, BLEU: 11.7, chr-F: 0.363\ntestset: URL, BLEU: 14.9, chr-F: 0.322\ntestset: URL, BLEU: 9.1, chr-F: 0.398\ntestset: URL, BLEU: 3.3, chr-F: 0.117\ntestset: URL, BLEU: 13.1, chr-F: 0.387\ntestset: URL, BLEU: 3.1, chr-F: 0.154\ntestset: URL, BLEU: 2.4, chr-F: 0.206\ntestset: URL, BLEU: 13.9, chr-F: 0.395\ntestset: URL, BLEU: 2.1, chr-F: 0.209\ntestset: URL, BLEU: 1.7, chr-F: 0.147\ntestset: URL, BLEU: 10.5, chr-F: 0.350\ntestset: URL, BLEU: 10.7, chr-F: 0.299\ntestset: URL, BLEU: 12.0, chr-F: 0.373\ntestset: URL, BLEU: 3.2, chr-F: 0.225\ntestset: URL, BLEU: 13.4, chr-F: 0.308\ntestset: URL, BLEU: 37.4, chr-F: 0.525\ntestset: URL, BLEU: 2.8, chr-F: 0.036\ntestset: URL, BLEU: 40.3, chr-F: 0.596\ntestset: URL, BLEU: 31.7, chr-F: 0.490\ntestset: URL, BLEU: 36.3, chr-F: 0.658\ntestset: URL, BLEU: 2.9, chr-F: 0.209\ntestset: URL, BLEU: 38.8, chr-F: 0.530\ntestset: URL, BLEU: 5.8, chr-F: 0.165\ntestset: URL, BLEU: 1.0, chr-F: 0.159\ntestset: URL, BLEU: 36.4, chr-F: 0.568\ntestset: URL, BLEU: 35.0, chr-F: 0.573\ntestset: URL, BLEU: 29.6, chr-F: 0.495\ntestset: URL, BLEU: 3.7, chr-F: 0.194\ntestset: URL, BLEU: 6.6, chr-F: 0.133\ntestset: URL, BLEU: 4.2, chr-F: 0.087\ntestset: URL, BLEU: 2.0, chr-F: 0.243\ntestset: URL, BLEU: 41.4, chr-F: 0.618\ntestset: URL, BLEU: 0.6, chr-F: 0.178\ntestset: URL, BLEU: 8.3, chr-F: 0.238\ntestset: URL, BLEU: 59.4, chr-F: 0.759\ntestset: URL, BLEU: 49.9, chr-F: 0.685\ntestset: URL, BLEU: 54.1, chr-F: 0.699\ntestset: URL, BLEU: 5.0, chr-F: 0.250\ntestset: URL, BLEU: 2.4, chr-F: 0.224\ntestset: URL, BLEU: 19.4, chr-F: 0.446\ntestset: URL, BLEU: 2.5, chr-F: 0.273\ntestset: URL, BLEU: 13.8, chr-F: 0.292\ntestset: URL, BLEU: 21.3, chr-F: 0.457\ntestset: URL, BLEU: 14.7, chr-F: 0.423\ntestset: URL, BLEU: 1.9, chr-F: 0.257\ntestset: URL, BLEU: 4.2, chr-F: 0.162\ntestset: URL, BLEU: 2.6, chr-F: 0.186\ntestset: URL, BLEU: 39.7, chr-F: 0.529\ntestset: URL, BLEU: 25.0, chr-F: 0.427\ntestset: URL, BLEU: 28.4, chr-F: 0.428\ntestset: URL, BLEU: 41.8, chr-F: 0.595\ntestset: URL, BLEU: 36.4, chr-F: 0.565\ntestset: URL, BLEU: 7.7, chr-F: 0.328\ntestset: URL, BLEU: 21.1, chr-F: 0.428\ntestset: URL, BLEU: 2.0, chr-F: 0.118\ntestset: URL, BLEU: 6.3, chr-F: 0.255\ntestset: URL, BLEU: 1.4, chr-F: 0.244\ntestset: URL, BLEU: 4.4, 
chr-F: 0.204\ntestset: URL, BLEU: 10.7, chr-F: 0.371\ntestset: URL, BLEU: 1.4, chr-F: 0.105\ntestset: URL, BLEU: 9.5, chr-F: 0.343\ntestset: URL, BLEU: 15.1, chr-F: 0.306\ntestset: URL, BLEU: 0.7, chr-F: 0.196\ntestset: URL, BLEU: 11.6, chr-F: 0.308\ntestset: URL, BLEU: 0.9, chr-F: 0.186\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 0.6, chr-F: 0.079\ntestset: URL, BLEU: 16.7, chr-F: 0.372\ntestset: URL, BLEU: 15.8, chr-F: 0.344\ntestset: URL, BLEU: 1.3, chr-F: 0.166\ntestset: URL, BLEU: 5.6, chr-F: 0.157\ntestset: URL, BLEU: 2.2, chr-F: 0.160\ntestset: URL, BLEU: 2.1, chr-F: 0.238\ntestset: URL, BLEU: 14.4, chr-F: 0.365\ntestset: URL, BLEU: 20.9, chr-F: 0.397\ntestset: URL, BLEU: 3.7, chr-F: 0.165\ntestset: URL, BLEU: 1.8, chr-F: 0.156",
"### System Info:\n\n\n* hf\\_name: gmw-gmw\n* source\\_languages: gmw\n* target\\_languages: gmw\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']\n* src\\_constituents: {'ksh', 'nld', 'eng', 'enm\\_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang\\_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}\n* tgt\\_constituents: {'ksh', 'nld', 'eng', 'enm\\_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang\\_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: gmw\n* tgt\\_alpha3: gmw\n* short\\_pair: gmw-gmw\n* chrF2\\_score: 0.568\n* bleu: 36.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 72534.0\n* src\\_name: West Germanic languages\n* tgt\\_name: West Germanic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: gmw\n* tgt\\_alpha2: gmw\n* prefer\\_old: False\n* long\\_pair: gmw-gmw\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #en #lb #af #de #fy #yi #gmw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### gmw-gmw\n\n\n* source group: West Germanic languages\n* target group: West Germanic languages\n* OPUS readme: gmw-gmw\n* model: transformer\n* source language(s): afr ang\\_Latn deu eng enm\\_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid\n* target language(s): afr ang\\_Latn deu eng enm\\_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.527\ntestset: URL, BLEU: 19.0, chr-F: 0.502\ntestset: URL, BLEU: 23.7, chr-F: 0.515\ntestset: URL, BLEU: 19.2, chr-F: 0.491\ntestset: URL, BLEU: 23.1, chr-F: 0.514\ntestset: URL, BLEU: 18.6, chr-F: 0.495\ntestset: URL, BLEU: 25.8, chr-F: 0.545\ntestset: URL, BLEU: 20.3, chr-F: 0.505\ntestset: URL, BLEU: 23.7, chr-F: 0.523\ntestset: URL, BLEU: 18.9, chr-F: 0.490\ntestset: URL, BLEU: 24.4, chr-F: 0.529\ntestset: URL, BLEU: 19.2, chr-F: 0.489\ntestset: URL, BLEU: 27.2, chr-F: 0.545\ntestset: URL, BLEU: 22.4, chr-F: 0.514\ntestset: URL, BLEU: 27.0, chr-F: 0.546\ntestset: URL, BLEU: 28.4, chr-F: 0.552\ntestset: URL, BLEU: 25.3, chr-F: 0.541\ntestset: URL, BLEU: 33.2, chr-F: 0.595\ntestset: URL, BLEU: 29.8, chr-F: 0.578\ntestset: URL, BLEU: 29.0, chr-F: 0.557\ntestset: URL, BLEU: 23.9, chr-F: 0.534\ntestset: URL, BLEU: 35.9, chr-F: 0.607\ntestset: URL, BLEU: 34.8, chr-F: 0.609\ntestset: URL, BLEU: 32.1, chr-F: 0.579\ntestset: URL, BLEU: 31.0, chr-F: 0.579\ntestset: URL, BLEU: 0.0, chr-F: 0.065\ntestset: URL, BLEU: 46.8, chr-F: 0.668\ntestset: URL, BLEU: 58.5, chr-F: 0.728\ntestset: URL, BLEU: 13.4, chr-F: 0.357\ntestset: URL, BLEU: 5.3, chr-F: 0.026\ntestset: URL, BLEU: 3.5, chr-F: 0.228\ntestset: URL, BLEU: 1.6, chr-F: 0.131\ntestset: URL, BLEU: 55.4, chr-F: 0.715\ntestset: URL, BLEU: 3.4, chr-F: 0.008\ntestset: URL, BLEU: 3.1, chr-F: 0.096\ntestset: URL, BLEU: 2.6, chr-F: 0.188\ntestset: URL, BLEU: 5.4, chr-F: 0.211\ntestset: URL, BLEU: 1.7, chr-F: 0.197\ntestset: URL, BLEU: 6.6, chr-F: 0.186\ntestset: URL, BLEU: 5.3, chr-F: 0.072\ntestset: URL, BLEU: 0.9, chr-F: 0.131\ntestset: URL, BLEU: 52.7, chr-F: 0.699\ntestset: URL, BLEU: 0.8, chr-F: 0.133\ntestset: URL, BLEU: 43.5, chr-F: 0.621\ntestset: URL, BLEU: 6.9, chr-F: 0.245\ntestset: URL, BLEU: 0.8, chr-F: 0.200\ntestset: URL, BLEU: 15.1, chr-F: 0.367\ntestset: URL, BLEU: 2.2, chr-F: 0.279\ntestset: URL, BLEU: 1.0, chr-F: 0.176\ntestset: URL, BLEU: 0.6, chr-F: 0.208\ntestset: URL, BLEU: 12.1, chr-F: 0.274\ntestset: URL, BLEU: 18.8, chr-F: 0.446\ntestset: URL, BLEU: 48.6, chr-F: 0.669\ntestset: URL, BLEU: 4.6, chr-F: 0.198\ntestset: URL, BLEU: 12.0, chr-F: 0.340\ntestset: URL, BLEU: 3.2, chr-F: 0.240\ntestset: URL, BLEU: 0.5, chr-F: 0.179\ntestset: URL, BLEU: 1.7, chr-F: 0.160\ntestset: URL, BLEU: 55.8, chr-F: 0.730\ntestset: URL, BLEU: 5.7, chr-F: 0.157\ntestset: URL, BLEU: 36.7, chr-F: 0.584\ntestset: URL, BLEU: 2.0, chr-F: 0.272\ntestset: URL, BLEU: 6.1, chr-F: 0.246\ntestset: URL, BLEU: 15.3, chr-F: 0.378\ntestset: URL, BLEU: 1.2, chr-F: 0.242\ntestset: URL, BLEU: 0.9, chr-F: 0.164\ntestset: URL, BLEU: 0.9, chr-F: 0.170\ntestset: URL, BLEU: 13.7, chr-F: 0.263\ntestset: URL, BLEU: 17.1, chr-F: 0.410\ntestset: URL, BLEU: 49.6, chr-F: 0.673\ntestset: URL, BLEU: 5.1, chr-F: 0.218\ntestset: URL, BLEU: 34.8, chr-F: 0.587\ntestset: URL, BLEU: 2.1, 
chr-F: 0.322\ntestset: URL, BLEU: 1.7, chr-F: 0.192\ntestset: URL, BLEU: 1.7, chr-F: 0.173\ntestset: URL, BLEU: 13.4, chr-F: 0.397\ntestset: URL, BLEU: 0.7, chr-F: 0.063\ntestset: URL, BLEU: 41.5, chr-F: 0.514\ntestset: URL, BLEU: 21.3, chr-F: 0.483\ntestset: URL, BLEU: 0.0, chr-F: 0.058\ntestset: URL, BLEU: 10.7, chr-F: 0.354\ntestset: URL, BLEU: 7.0, chr-F: 0.161\ntestset: URL, BLEU: 18.6, chr-F: 0.316\ntestset: URL, BLEU: 38.3, chr-F: 0.524\ntestset: URL, BLEU: 0.7, chr-F: 0.128\ntestset: URL, BLEU: 4.1, chr-F: 0.219\ntestset: URL, BLEU: 14.1, chr-F: 0.186\ntestset: URL, BLEU: 3.1, chr-F: 0.129\ntestset: URL, BLEU: 3.6, chr-F: 0.226\ntestset: URL, BLEU: 12.4, chr-F: 0.145\ntestset: URL, BLEU: 9.8, chr-F: 0.209\ntestset: URL, BLEU: 2.8, chr-F: 0.142\ntestset: URL, BLEU: 0.0, chr-F: 1.000\ntestset: URL, BLEU: 30.1, chr-F: 0.535\ntestset: URL, BLEU: 28.0, chr-F: 0.486\ntestset: URL, BLEU: 16.0, chr-F: 0.262\ntestset: URL, BLEU: 5.5, chr-F: 0.160\ntestset: URL, BLEU: 1.6, chr-F: 0.307\ntestset: URL, BLEU: 30.4, chr-F: 0.438\ntestset: URL, BLEU: 8.1, chr-F: 0.083\ntestset: URL, BLEU: 41.4, chr-F: 0.616\ntestset: URL, BLEU: 1.6, chr-F: 0.217\ntestset: URL, BLEU: 1.6, chr-F: 0.159\ntestset: URL, BLEU: 6.3, chr-F: 0.318\ntestset: URL, BLEU: 6.2, chr-F: 0.058\ntestset: URL, BLEU: 11.7, chr-F: 0.363\ntestset: URL, BLEU: 14.9, chr-F: 0.322\ntestset: URL, BLEU: 9.1, chr-F: 0.398\ntestset: URL, BLEU: 3.3, chr-F: 0.117\ntestset: URL, BLEU: 13.1, chr-F: 0.387\ntestset: URL, BLEU: 3.1, chr-F: 0.154\ntestset: URL, BLEU: 2.4, chr-F: 0.206\ntestset: URL, BLEU: 13.9, chr-F: 0.395\ntestset: URL, BLEU: 2.1, chr-F: 0.209\ntestset: URL, BLEU: 1.7, chr-F: 0.147\ntestset: URL, BLEU: 10.5, chr-F: 0.350\ntestset: URL, BLEU: 10.7, chr-F: 0.299\ntestset: URL, BLEU: 12.0, chr-F: 0.373\ntestset: URL, BLEU: 3.2, chr-F: 0.225\ntestset: URL, BLEU: 13.4, chr-F: 0.308\ntestset: URL, BLEU: 37.4, chr-F: 0.525\ntestset: URL, BLEU: 2.8, chr-F: 0.036\ntestset: URL, BLEU: 40.3, chr-F: 0.596\ntestset: URL, BLEU: 31.7, chr-F: 0.490\ntestset: URL, BLEU: 36.3, chr-F: 0.658\ntestset: URL, BLEU: 2.9, chr-F: 0.209\ntestset: URL, BLEU: 38.8, chr-F: 0.530\ntestset: URL, BLEU: 5.8, chr-F: 0.165\ntestset: URL, BLEU: 1.0, chr-F: 0.159\ntestset: URL, BLEU: 36.4, chr-F: 0.568\ntestset: URL, BLEU: 35.0, chr-F: 0.573\ntestset: URL, BLEU: 29.6, chr-F: 0.495\ntestset: URL, BLEU: 3.7, chr-F: 0.194\ntestset: URL, BLEU: 6.6, chr-F: 0.133\ntestset: URL, BLEU: 4.2, chr-F: 0.087\ntestset: URL, BLEU: 2.0, chr-F: 0.243\ntestset: URL, BLEU: 41.4, chr-F: 0.618\ntestset: URL, BLEU: 0.6, chr-F: 0.178\ntestset: URL, BLEU: 8.3, chr-F: 0.238\ntestset: URL, BLEU: 59.4, chr-F: 0.759\ntestset: URL, BLEU: 49.9, chr-F: 0.685\ntestset: URL, BLEU: 54.1, chr-F: 0.699\ntestset: URL, BLEU: 5.0, chr-F: 0.250\ntestset: URL, BLEU: 2.4, chr-F: 0.224\ntestset: URL, BLEU: 19.4, chr-F: 0.446\ntestset: URL, BLEU: 2.5, chr-F: 0.273\ntestset: URL, BLEU: 13.8, chr-F: 0.292\ntestset: URL, BLEU: 21.3, chr-F: 0.457\ntestset: URL, BLEU: 14.7, chr-F: 0.423\ntestset: URL, BLEU: 1.9, chr-F: 0.257\ntestset: URL, BLEU: 4.2, chr-F: 0.162\ntestset: URL, BLEU: 2.6, chr-F: 0.186\ntestset: URL, BLEU: 39.7, chr-F: 0.529\ntestset: URL, BLEU: 25.0, chr-F: 0.427\ntestset: URL, BLEU: 28.4, chr-F: 0.428\ntestset: URL, BLEU: 41.8, chr-F: 0.595\ntestset: URL, BLEU: 36.4, chr-F: 0.565\ntestset: URL, BLEU: 7.7, chr-F: 0.328\ntestset: URL, BLEU: 21.1, chr-F: 0.428\ntestset: URL, BLEU: 2.0, chr-F: 0.118\ntestset: URL, BLEU: 6.3, chr-F: 0.255\ntestset: URL, BLEU: 1.4, chr-F: 0.244\ntestset: URL, BLEU: 4.4, 
chr-F: 0.204\ntestset: URL, BLEU: 10.7, chr-F: 0.371\ntestset: URL, BLEU: 1.4, chr-F: 0.105\ntestset: URL, BLEU: 9.5, chr-F: 0.343\ntestset: URL, BLEU: 15.1, chr-F: 0.306\ntestset: URL, BLEU: 0.7, chr-F: 0.196\ntestset: URL, BLEU: 11.6, chr-F: 0.308\ntestset: URL, BLEU: 0.9, chr-F: 0.186\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 0.6, chr-F: 0.079\ntestset: URL, BLEU: 16.7, chr-F: 0.372\ntestset: URL, BLEU: 15.8, chr-F: 0.344\ntestset: URL, BLEU: 1.3, chr-F: 0.166\ntestset: URL, BLEU: 5.6, chr-F: 0.157\ntestset: URL, BLEU: 2.2, chr-F: 0.160\ntestset: URL, BLEU: 2.1, chr-F: 0.238\ntestset: URL, BLEU: 14.4, chr-F: 0.365\ntestset: URL, BLEU: 20.9, chr-F: 0.397\ntestset: URL, BLEU: 3.7, chr-F: 0.165\ntestset: URL, BLEU: 1.8, chr-F: 0.156",
"### System Info:\n\n\n* hf\\_name: gmw-gmw\n* source\\_languages: gmw\n* target\\_languages: gmw\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']\n* src\\_constituents: {'ksh', 'nld', 'eng', 'enm\\_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang\\_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}\n* tgt\\_constituents: {'ksh', 'nld', 'eng', 'enm\\_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang\\_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: gmw\n* tgt\\_alpha3: gmw\n* short\\_pair: gmw-gmw\n* chrF2\\_score: 0.568\n* bleu: 36.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 72534.0\n* src\\_name: West Germanic languages\n* tgt\\_name: West Germanic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: gmw\n* tgt\\_alpha2: gmw\n* prefer\\_old: False\n* long\\_pair: gmw-gmw\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
65,
4305,
619
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #en #lb #af #de #fy #yi #gmw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### gmw-gmw\n\n\n* source group: West Germanic languages\n* target group: West Germanic languages\n* OPUS readme: gmw-gmw\n* model: transformer\n* source language(s): afr ang\\_Latn deu eng enm\\_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid\n* target language(s): afr ang\\_Latn deu eng enm\\_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.527\ntestset: URL, BLEU: 19.0, chr-F: 0.502\ntestset: URL, BLEU: 23.7, chr-F: 0.515\ntestset: URL, BLEU: 19.2, chr-F: 0.491\ntestset: URL, BLEU: 23.1, chr-F: 0.514\ntestset: URL, BLEU: 18.6, chr-F: 0.495\ntestset: URL, BLEU: 25.8, chr-F: 0.545\ntestset: URL, BLEU: 20.3, chr-F: 0.505\ntestset: URL, BLEU: 23.7, chr-F: 0.523\ntestset: URL, BLEU: 18.9, chr-F: 0.490\ntestset: URL, BLEU: 24.4, chr-F: 0.529\ntestset: URL, BLEU: 19.2, chr-F: 0.489\ntestset: URL, BLEU: 27.2, chr-F: 0.545\ntestset: URL, BLEU: 22.4, chr-F: 0.514\ntestset: URL, BLEU: 27.0, chr-F: 0.546\ntestset: URL, BLEU: 28.4, chr-F: 0.552\ntestset: URL, BLEU: 25.3, chr-F: 0.541\ntestset: URL, BLEU: 33.2, chr-F: 0.595\ntestset: URL, BLEU: 29.8, chr-F: 0.578\ntestset: URL, BLEU: 29.0, chr-F: 0.557\ntestset: URL, BLEU: 23.9, chr-F: 0.534\ntestset: URL, BLEU: 35.9, chr-F: 0.607\ntestset: URL, BLEU: 34.8, chr-F: 0.609\ntestset: URL, BLEU: 32.1, chr-F: 0.579\ntestset: URL, BLEU: 31.0, chr-F: 0.579\ntestset: URL, BLEU: 0.0, chr-F: 0.065\ntestset: URL, BLEU: 46.8, chr-F: 0.668\ntestset: URL, BLEU: 58.5, chr-F: 0.728\ntestset: URL, BLEU: 13.4, chr-F: 0.357\ntestset: URL, BLEU: 5.3, chr-F: 0.026\ntestset: URL, BLEU: 3.5, chr-F: 0.228\ntestset: URL, BLEU: 1.6, chr-F: 0.131\ntestset: URL, BLEU: 55.4, chr-F: 0.715\ntestset: URL, BLEU: 3.4, chr-F: 0.008\ntestset: URL, BLEU: 3.1, chr-F: 0.096\ntestset: URL, BLEU: 2.6, chr-F: 0.188\ntestset: URL, BLEU: 5.4, chr-F: 0.211\ntestset: URL, BLEU: 1.7, chr-F: 0.197\ntestset: URL, BLEU: 6.6, chr-F: 0.186\ntestset: URL, BLEU: 5.3, chr-F: 0.072\ntestset: URL, BLEU: 0.9, chr-F: 0.131\ntestset: URL, BLEU: 52.7, chr-F: 0.699\ntestset: URL, BLEU: 0.8, chr-F: 0.133\ntestset: URL, BLEU: 43.5, chr-F: 0.621\ntestset: URL, BLEU: 6.9, chr-F: 0.245\ntestset: URL, BLEU: 0.8, chr-F: 0.200\ntestset: URL, BLEU: 15.1, chr-F: 0.367\ntestset: URL, BLEU: 2.2, chr-F: 0.279\ntestset: URL, BLEU: 1.0, chr-F: 0.176\ntestset: URL, BLEU: 0.6, chr-F: 0.208\ntestset: URL, BLEU: 12.1, chr-F: 0.274\ntestset: URL, BLEU: 18.8, chr-F: 0.446\ntestset: URL, BLEU: 48.6, chr-F: 0.669\ntestset: URL, BLEU: 4.6, chr-F: 0.198\ntestset: URL, BLEU: 12.0, chr-F: 0.340\ntestset: URL, BLEU: 3.2, chr-F: 0.240\ntestset: URL, BLEU: 0.5, chr-F: 0.179\ntestset: URL, BLEU: 1.7, chr-F: 0.160\ntestset: URL, BLEU: 55.8, chr-F: 0.730\ntestset: URL, BLEU: 5.7, chr-F: 0.157\ntestset: URL, BLEU: 36.7, chr-F: 0.584\ntestset: URL, BLEU: 2.0, chr-F: 0.272\ntestset: URL, BLEU: 6.1, chr-F: 0.246\ntestset: URL, BLEU: 15.3, chr-F: 0.378\ntestset: URL, BLEU: 1.2, chr-F: 0.242\ntestset: URL, BLEU: 0.9, chr-F: 0.164\ntestset: URL, BLEU: 0.9, chr-F: 0.170\ntestset: URL, BLEU: 13.7, 
chr-F: 0.263\ntestset: URL, BLEU: 17.1, chr-F: 0.410\ntestset: URL, BLEU: 49.6, chr-F: 0.673\ntestset: URL, BLEU: 5.1, chr-F: 0.218\ntestset: URL, BLEU: 34.8, chr-F: 0.587\ntestset: URL, BLEU: 2.1, chr-F: 0.322\ntestset: URL, BLEU: 1.7, chr-F: 0.192\ntestset: URL, BLEU: 1.7, chr-F: 0.173\ntestset: URL, BLEU: 13.4, chr-F: 0.397\ntestset: URL, BLEU: 0.7, chr-F: 0.063\ntestset: URL, BLEU: 41.5, chr-F: 0.514\ntestset: URL, BLEU: 21.3, chr-F: 0.483\ntestset: URL, BLEU: 0.0, chr-F: 0.058\ntestset: URL, BLEU: 10.7, chr-F: 0.354\ntestset: URL, BLEU: 7.0, chr-F: 0.161\ntestset: URL, BLEU: 18.6, chr-F: 0.316\ntestset: URL, BLEU: 38.3, chr-F: 0.524\ntestset: URL, BLEU: 0.7, chr-F: 0.128\ntestset: URL, BLEU: 4.1, chr-F: 0.219\ntestset: URL, BLEU: 14.1, chr-F: 0.186\ntestset: URL, BLEU: 3.1, chr-F: 0.129\ntestset: URL, BLEU: 3.6, chr-F: 0.226\ntestset: URL, BLEU: 12.4, chr-F: 0.145\ntestset: URL, BLEU: 9.8, chr-F: 0.209\ntestset: URL, BLEU: 2.8, chr-F: 0.142\ntestset: URL, BLEU: 0.0, chr-F: 1.000\ntestset: URL, BLEU: 30.1, chr-F: 0.535\ntestset: URL, BLEU: 28.0, chr-F: 0.486\ntestset: URL, BLEU: 16.0, chr-F: 0.262\ntestset: URL, BLEU: 5.5, chr-F: 0.160\ntestset: URL, BLEU: 1.6, chr-F: 0.307\ntestset: URL, BLEU: 30.4, chr-F: 0.438\ntestset: URL, BLEU: 8.1, chr-F: 0.083\ntestset: URL, BLEU: 41.4, chr-F: 0.616\ntestset: URL, BLEU: 1.6, chr-F: 0.217\ntestset: URL, BLEU: 1.6, chr-F: 0.159\ntestset: URL, BLEU: 6.3, chr-F: 0.318\ntestset: URL, BLEU: 6.2, chr-F: 0.058\ntestset: URL, BLEU: 11.7, chr-F: 0.363\ntestset: URL, BLEU: 14.9, chr-F: 0.322\ntestset: URL, BLEU: 9.1, chr-F: 0.398\ntestset: URL, BLEU: 3.3, chr-F: 0.117\ntestset: URL, BLEU: 13.1, chr-F: 0.387\ntestset: URL, BLEU: 3.1, chr-F: 0.154\ntestset: URL, BLEU: 2.4, chr-F: 0.206\ntestset: URL, BLEU: 13.9, chr-F: 0.395\ntestset: URL, BLEU: 2.1, chr-F: 0.209\ntestset: URL, BLEU: 1.7, chr-F: 0.147\ntestset: URL, BLEU: 10.5, chr-F: 0.350\ntestset: URL, BLEU: 10.7, chr-F: 0.299\ntestset: URL, BLEU: 12.0, chr-F: 0.373\ntestset: URL, BLEU: 3.2, chr-F: 0.225\ntestset: URL, BLEU: 13.4, chr-F: 0.308\ntestset: URL, BLEU: 37.4, chr-F: 0.525\ntestset: URL, BLEU: 2.8, chr-F: 0.036\ntestset: URL, BLEU: 40.3, chr-F: 0.596\ntestset: URL, BLEU: 31.7, chr-F: 0.490\ntestset: URL, BLEU: 36.3, chr-F: 0.658\ntestset: URL, BLEU: 2.9, chr-F: 0.209\ntestset: URL, BLEU: 38.8, chr-F: 0.530\ntestset: URL, BLEU: 5.8, chr-F: 0.165\ntestset: URL, BLEU: 1.0, chr-F: 0.159\ntestset: URL, BLEU: 36.4, chr-F: 0.568\ntestset: URL, BLEU: 35.0, chr-F: 0.573\ntestset: URL, BLEU: 29.6, chr-F: 0.495\ntestset: URL, BLEU: 3.7, chr-F: 0.194\ntestset: URL, BLEU: 6.6, chr-F: 0.133\ntestset: URL, BLEU: 4.2, chr-F: 0.087\ntestset: URL, BLEU: 2.0, chr-F: 0.243\ntestset: URL, BLEU: 41.4, chr-F: 0.618\ntestset: URL, BLEU: 0.6, chr-F: 0.178\ntestset: URL, BLEU: 8.3, chr-F: 0.238\ntestset: URL, BLEU: 59.4, chr-F: 0.759\ntestset: URL, BLEU: 49.9, chr-F: 0.685\ntestset: URL, BLEU: 54.1, chr-F: 0.699\ntestset: URL, BLEU: 5.0, chr-F: 0.250\ntestset: URL, BLEU: 2.4, chr-F: 0.224\ntestset: URL, BLEU: 19.4, chr-F: 0.446\ntestset: URL, BLEU: 2.5, chr-F: 0.273\ntestset: URL, BLEU: 13.8, chr-F: 0.292\ntestset: URL, BLEU: 21.3, chr-F: 0.457\ntestset: URL, BLEU: 14.7, chr-F: 0.423\ntestset: URL, BLEU: 1.9, chr-F: 0.257\ntestset: URL, BLEU: 4.2, chr-F: 0.162\ntestset: URL, BLEU: 2.6, chr-F: 0.186\ntestset: URL, BLEU: 39.7, chr-F: 0.529\ntestset: URL, BLEU: 25.0, chr-F: 0.427\ntestset: URL, BLEU: 28.4, chr-F: 0.428\ntestset: URL, BLEU: 41.8, chr-F: 0.595\ntestset: URL, BLEU: 36.4, chr-F: 0.565\ntestset: URL, BLEU: 7.7, 
chr-F: 0.328\ntestset: URL, BLEU: 21.1, chr-F: 0.428\ntestset: URL, BLEU: 2.0, chr-F: 0.118\ntestset: URL, BLEU: 6.3, chr-F: 0.255\ntestset: URL, BLEU: 1.4, chr-F: 0.244\ntestset: URL, BLEU: 4.4, chr-F: 0.204\ntestset: URL, BLEU: 10.7, chr-F: 0.371\ntestset: URL, BLEU: 1.4, chr-F: 0.105\ntestset: URL, BLEU: 9.5, chr-F: 0.343\ntestset: URL, BLEU: 15.1, chr-F: 0.306\ntestset: URL, BLEU: 0.7, chr-F: 0.196\ntestset: URL, BLEU: 11.6, chr-F: 0.308\ntestset: URL, BLEU: 0.9, chr-F: 0.186\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 0.6, chr-F: 0.079\ntestset: URL, BLEU: 16.7, chr-F: 0.372\ntestset: URL, BLEU: 15.8, chr-F: 0.344\ntestset: URL, BLEU: 1.3, chr-F: 0.166\ntestset: URL, BLEU: 5.6, chr-F: 0.157\ntestset: URL, BLEU: 2.2, chr-F: 0.160\ntestset: URL, BLEU: 2.1, chr-F: 0.238\ntestset: URL, BLEU: 14.4, chr-F: 0.365\ntestset: URL, BLEU: 20.9, chr-F: 0.397\ntestset: URL, BLEU: 3.7, chr-F: 0.165\ntestset: URL, BLEU: 1.8, chr-F: 0.156### System Info:\n\n\n* hf\\_name: gmw-gmw\n* source\\_languages: gmw\n* target\\_languages: gmw\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'en', 'lb', 'af', 'de', 'fy', 'yi', 'gmw']\n* src\\_constituents: {'ksh', 'nld', 'eng', 'enm\\_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang\\_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}\n* tgt\\_constituents: {'ksh', 'nld', 'eng', 'enm\\_Latn', 'ltz', 'stq', 'afr', 'pdc', 'deu', 'gos', 'ang\\_Latn', 'fry', 'gsw', 'frr', 'nds', 'yid', 'swg', 'sco'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: gmw\n* tgt\\_alpha3: gmw\n* short\\_pair: gmw-gmw\n* chrF2\\_score: 0.568\n* bleu: 36.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 72534.0\n* src\\_name: West Germanic languages\n* tgt\\_name: West Germanic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: gmw\n* tgt\\_alpha2: gmw\n* prefer\\_old: False\n* long\\_pair: gmw-gmw\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### grk-eng
* source group: Greek languages
* target group: English
* OPUS readme: [grk-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/grk-eng/README.md)
* model: transformer
* source language(s): ell grc_Grek
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/grk-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/grk-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/grk-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ell-eng.ell.eng | 65.9 | 0.779 |
| Tatoeba-test.grc-eng.grc.eng | 4.1 | 0.187 |
| Tatoeba-test.multi.eng | 60.9 | 0.733 |
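
A minimal usage sketch with the Hugging Face `transformers` Marian classes (the Greek example sentence is illustrative only):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-grk-en"  # repo id as listed in this card
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Modern Greek (ell) input; the model was also trained on Ancient Greek (grc_Grek).
batch = tokenizer(["Καλημέρα, τι κάνεις;"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```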
### System Info:
- hf_name: grk-eng
- source_languages: grk
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/grk-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['el', 'grk', 'en']
- src_constituents: {'grc_Grek', 'ell'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/grk-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/grk-eng/opus2m-2020-08-01.test.txt
- src_alpha3: grk
- tgt_alpha3: eng
- short_pair: grk-en
- chrF2_score: 0.733
- bleu: 60.9
- brevity_penalty: 0.973
- ref_len: 62205.0
- src_name: Greek languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: grk
- tgt_alpha2: en
- prefer_old: False
- long_pair: grk-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["el", "grk", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-grk-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"el",
"grk",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"el",
"grk",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #el #grk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### grk-eng
* source group: Greek languages
* target group: English
* OPUS readme: grk-eng
* model: transformer
* source language(s): ell grc\_Grek
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 65.9, chr-F: 0.779
testset: URL, BLEU: 4.1, chr-F: 0.187
testset: URL, BLEU: 60.9, chr-F: 0.733
### System Info:
* hf\_name: grk-eng
* source\_languages: grk
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['el', 'grk', 'en']
* src\_constituents: {'grc\_Grek', 'ell'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm12k,spm12k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: grk
* tgt\_alpha3: eng
* short\_pair: grk-en
* chrF2\_score: 0.733
* bleu: 60.9
* brevity\_penalty: 0.973
* ref\_len: 62205.0
* src\_name: Greek languages
* tgt\_name: English
* train\_date: 2020-08-01
* src\_alpha2: grk
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: grk-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### grk-eng\n\n\n* source group: Greek languages\n* target group: English\n* OPUS readme: grk-eng\n* model: transformer\n* source language(s): ell grc\\_Grek\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 65.9, chr-F: 0.779\ntestset: URL, BLEU: 4.1, chr-F: 0.187\ntestset: URL, BLEU: 60.9, chr-F: 0.733",
"### System Info:\n\n\n* hf\\_name: grk-eng\n* source\\_languages: grk\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['el', 'grk', 'en']\n* src\\_constituents: {'grc\\_Grek', 'ell'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: grk\n* tgt\\_alpha3: eng\n* short\\_pair: grk-en\n* chrF2\\_score: 0.733\n* bleu: 60.9\n* brevity\\_penalty: 0.973\n* ref\\_len: 62205.0\n* src\\_name: Greek languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: grk\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: grk-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #el #grk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### grk-eng\n\n\n* source group: Greek languages\n* target group: English\n* OPUS readme: grk-eng\n* model: transformer\n* source language(s): ell grc\\_Grek\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 65.9, chr-F: 0.779\ntestset: URL, BLEU: 4.1, chr-F: 0.187\ntestset: URL, BLEU: 60.9, chr-F: 0.733",
"### System Info:\n\n\n* hf\\_name: grk-eng\n* source\\_languages: grk\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['el', 'grk', 'en']\n* src\\_constituents: {'grc\\_Grek', 'ell'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: grk\n* tgt\\_alpha3: eng\n* short\\_pair: grk-en\n* chrF2\\_score: 0.733\n* bleu: 60.9\n* brevity\\_penalty: 0.973\n* ref\\_len: 62205.0\n* src\\_name: Greek languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: grk\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: grk-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
54,
182,
414
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #el #grk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### grk-eng\n\n\n* source group: Greek languages\n* target group: English\n* OPUS readme: grk-eng\n* model: transformer\n* source language(s): ell grc\\_Grek\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 65.9, chr-F: 0.779\ntestset: URL, BLEU: 4.1, chr-F: 0.187\ntestset: URL, BLEU: 60.9, chr-F: 0.733### System Info:\n\n\n* hf\\_name: grk-eng\n* source\\_languages: grk\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['el', 'grk', 'en']\n* src\\_constituents: {'grc\\_Grek', 'ell'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: grk\n* tgt\\_alpha3: eng\n* short\\_pair: grk-en\n* chrF2\\_score: 0.733\n* bleu: 60.9\n* brevity\\_penalty: 0.973\n* ref\\_len: 62205.0\n* src\\_name: Greek languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: grk\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: grk-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-guw-de
* source languages: guw
* target languages: de
* OPUS readme: [guw-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.de | 22.7 | 0.434 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-guw-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"guw",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #guw #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-guw-de
* source languages: guw
* target languages: de
* OPUS readme: guw-de
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.7, chr-F: 0.434
| [
"### opus-mt-guw-de\n\n\n* source languages: guw\n* target languages: de\n* OPUS readme: guw-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.7, chr-F: 0.434"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #guw #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-guw-de\n\n\n* source languages: guw\n* target languages: de\n* OPUS readme: guw-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.7, chr-F: 0.434"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #guw #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-guw-de\n\n\n* source languages: guw\n* target languages: de\n* OPUS readme: guw-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.7, chr-F: 0.434"
] |
translation | transformers |
### opus-mt-guw-en
* source languages: guw
* target languages: en
* OPUS readme: [guw-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.en | 44.8 | 0.601 |
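
A quick way to try the model is the generic `transformers` translation pipeline; the input below is a placeholder, so substitute real Gun (guw) source text:

```python
from transformers import pipeline

# The generic "translation" task works for MarianMT checkpoints.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-guw-en")

# Replace the placeholder string with actual Gun (guw) text.
result = translator("gun source text here", max_length=128)
print(result[0]["translation_text"])
```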
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-guw-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"guw",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #guw #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-guw-en
* source languages: guw
* target languages: en
* OPUS readme: guw-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 44.8, chr-F: 0.601
| [
"### opus-mt-guw-en\n\n\n* source languages: guw\n* target languages: en\n* OPUS readme: guw-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.8, chr-F: 0.601"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #guw #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-guw-en\n\n\n* source languages: guw\n* target languages: en\n* OPUS readme: guw-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.8, chr-F: 0.601"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #guw #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-guw-en\n\n\n* source languages: guw\n* target languages: en\n* OPUS readme: guw-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.8, chr-F: 0.601"
] |
translation | transformers |
### opus-mt-guw-es
* source languages: guw
* target languages: es
* OPUS readme: [guw-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.es | 27.2 | 0.457 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-guw-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"guw",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #guw #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-guw-es
* source languages: guw
* target languages: es
* OPUS readme: guw-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.2, chr-F: 0.457
| [
"### opus-mt-guw-es\n\n\n* source languages: guw\n* target languages: es\n* OPUS readme: guw-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.2, chr-F: 0.457"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #guw #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-guw-es\n\n\n* source languages: guw\n* target languages: es\n* OPUS readme: guw-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.2, chr-F: 0.457"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #guw #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-guw-es\n\n\n* source languages: guw\n* target languages: es\n* OPUS readme: guw-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.2, chr-F: 0.457"
] |
translation | transformers |
### opus-mt-guw-fi
* source languages: guw
* target languages: fi
* OPUS readme: [guw-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.fi | 27.7 | 0.512 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-guw-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"guw",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #guw #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-guw-fi
* source languages: guw
* target languages: fi
* OPUS readme: guw-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.7, chr-F: 0.512
| [
"### opus-mt-guw-fi\n\n\n* source languages: guw\n* target languages: fi\n* OPUS readme: guw-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.7, chr-F: 0.512"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #guw #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-guw-fi\n\n\n* source languages: guw\n* target languages: fi\n* OPUS readme: guw-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.7, chr-F: 0.512"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #guw #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-guw-fi\n\n\n* source languages: guw\n* target languages: fi\n* OPUS readme: guw-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.7, chr-F: 0.512"
] |
translation | transformers |
### opus-mt-guw-fr
* source languages: guw
* target languages: fr
* OPUS readme: [guw-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.fr | 29.7 | 0.479 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-guw-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"guw",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #guw #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-guw-fr
* source languages: guw
* target languages: fr
* OPUS readme: guw-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 29.7, chr-F: 0.479
| [
"### opus-mt-guw-fr\n\n\n* source languages: guw\n* target languages: fr\n* OPUS readme: guw-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.7, chr-F: 0.479"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #guw #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-guw-fr\n\n\n* source languages: guw\n* target languages: fr\n* OPUS readme: guw-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.7, chr-F: 0.479"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #guw #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-guw-fr\n\n\n* source languages: guw\n* target languages: fr\n* OPUS readme: guw-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.7, chr-F: 0.479"
] |
translation | transformers |
### opus-mt-guw-sv
* source languages: guw
* target languages: sv
* OPUS readme: [guw-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.sv | 31.2 | 0.498 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-guw-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"guw",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #guw #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-guw-sv
* source languages: guw
* target languages: sv
* OPUS readme: guw-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.2, chr-F: 0.498
| [
"### opus-mt-guw-sv\n\n\n* source languages: guw\n* target languages: sv\n* OPUS readme: guw-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.2, chr-F: 0.498"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #guw #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-guw-sv\n\n\n* source languages: guw\n* target languages: sv\n* OPUS readme: guw-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.2, chr-F: 0.498"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #guw #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-guw-sv\n\n\n* source languages: guw\n* target languages: sv\n* OPUS readme: guw-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.2, chr-F: 0.498"
] |
translation | transformers |
### opus-mt-gv-en
* source languages: gv
* target languages: en
* OPUS readme: [gv-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/gv-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/gv-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/gv-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/gv-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| bible-uedin.gv.en | 38.9 | 0.668 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-gv-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"gv",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #gv #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-gv-en
* source languages: gv
* target languages: en
* OPUS readme: gv-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.9, chr-F: 0.668
| [
"### opus-mt-gv-en\n\n\n* source languages: gv\n* target languages: en\n* OPUS readme: gv-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.9, chr-F: 0.668"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gv #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-gv-en\n\n\n* source languages: gv\n* target languages: en\n* OPUS readme: gv-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.9, chr-F: 0.668"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #gv #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-gv-en\n\n\n* source languages: gv\n* target languages: en\n* OPUS readme: gv-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.9, chr-F: 0.668"
] |
translation | transformers |
### opus-mt-ha-en
* source languages: ha
* target languages: en
* OPUS readme: [ha-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ha-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ha-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ha.en | 35.0 | 0.506 |
| Tatoeba.ha.en | 39.0 | 0.497 |
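
A short sketch of batched translation with beam search, using the standard Marian classes (the Hausa inputs are simple everyday phrases):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ha-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Padding lets Hausa sentences of different lengths share one batch.
src = ["Ina kwana?", "Na gode sosai."]
batch = tokenizer(src, return_tensors="pt", padding=True)
out = model.generate(**batch, num_beams=4, max_length=128)
for s, t in zip(src, tokenizer.batch_decode(out, skip_special_tokens=True)):
    print(f"{s} -> {t}")
```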
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ha-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ha",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ha #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ha-en
* source languages: ha
* target languages: en
* OPUS readme: ha-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 35.0, chr-F: 0.506
testset: URL, BLEU: 39.0, chr-F: 0.497
| [
"### opus-mt-ha-en\n\n\n* source languages: ha\n* target languages: en\n* OPUS readme: ha-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.0, chr-F: 0.506\ntestset: URL, BLEU: 39.0, chr-F: 0.497"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ha #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ha-en\n\n\n* source languages: ha\n* target languages: en\n* OPUS readme: ha-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.0, chr-F: 0.506\ntestset: URL, BLEU: 39.0, chr-F: 0.497"
] | [
51,
129
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ha #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ha-en\n\n\n* source languages: ha\n* target languages: en\n* OPUS readme: ha-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.0, chr-F: 0.506\ntestset: URL, BLEU: 39.0, chr-F: 0.497"
] |
translation | transformers |
### opus-mt-ha-es
* source languages: ha
* target languages: es
* OPUS readme: [ha-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ha-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ha-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ha.es | 21.8 | 0.394 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ha-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ha",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ha #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ha-es
* source languages: ha
* target languages: es
* OPUS readme: ha-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.8, chr-F: 0.394
| [
"### opus-mt-ha-es\n\n\n* source languages: ha\n* target languages: es\n* OPUS readme: ha-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.394"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ha #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ha-es\n\n\n* source languages: ha\n* target languages: es\n* OPUS readme: ha-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.394"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ha #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ha-es\n\n\n* source languages: ha\n* target languages: es\n* OPUS readme: ha-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.394"
] |
translation | transformers |
### opus-mt-ha-fi
* source languages: ha
* target languages: fi
* OPUS readme: [ha-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ha-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/ha-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ha.fi | 21.9 | 0.435 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ha-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ha",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ha #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ha-fi
* source languages: ha
* target languages: fi
* OPUS readme: ha-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.9, chr-F: 0.435
| [
"### opus-mt-ha-fi\n\n\n* source languages: ha\n* target languages: fi\n* OPUS readme: ha-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.9, chr-F: 0.435"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ha #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ha-fi\n\n\n* source languages: ha\n* target languages: fi\n* OPUS readme: ha-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.9, chr-F: 0.435"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ha #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ha-fi\n\n\n* source languages: ha\n* target languages: fi\n* OPUS readme: ha-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.9, chr-F: 0.435"
] |
translation | transformers |
### opus-mt-ha-fr
* source languages: ha
* target languages: fr
* OPUS readme: [ha-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ha-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ha-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ha.fr | 24.3 | 0.415 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ha-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ha",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ha #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ha-fr
* source languages: ha
* target languages: fr
* OPUS readme: ha-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.3, chr-F: 0.415
| [
"### opus-mt-ha-fr\n\n\n* source languages: ha\n* target languages: fr\n* OPUS readme: ha-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.3, chr-F: 0.415"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ha #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ha-fr\n\n\n* source languages: ha\n* target languages: fr\n* OPUS readme: ha-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.3, chr-F: 0.415"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ha #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ha-fr\n\n\n* source languages: ha\n* target languages: fr\n* OPUS readme: ha-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.3, chr-F: 0.415"
] |
translation | transformers |
### opus-mt-ha-sv
* source languages: ha
* target languages: sv
* OPUS readme: [ha-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ha-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ha-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ha-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ha.sv | 25.8 | 0.438 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ha-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ha",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ha #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ha-sv
* source languages: ha
* target languages: sv
* OPUS readme: ha-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.8, chr-F: 0.438
| [
"### opus-mt-ha-sv\n\n\n* source languages: ha\n* target languages: sv\n* OPUS readme: ha-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.8, chr-F: 0.438"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ha #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ha-sv\n\n\n* source languages: ha\n* target languages: sv\n* OPUS readme: ha-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.8, chr-F: 0.438"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ha #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ha-sv\n\n\n* source languages: ha\n* target languages: sv\n* OPUS readme: ha-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.8, chr-F: 0.438"
] |
translation | transformers |
### heb-ara
* source group: Hebrew
* target group: Arabic
* OPUS readme: [heb-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ara/README.md)
* model: transformer
* source language(s): heb
* target language(s): apc apc_Latn ara arq arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ara/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.ara | 23.6 | 0.532 |
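Because this model covers several target variants, the sentence-initial `>>id<<` token described above selects the output language. A minimal sketch (the model id comes from this card; `>>ara<<` is assumed to be a valid id based on the listed target languages, and the Hebrew sentence is a placeholder):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-he-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prefix the source sentence with a target-language token, e.g. >>ara<< for
# Standard Arabic; the other listed ids (apc, arq, arz, ...) work the same way.
src = ">>ara<< שלום עולם"
batch = tokenizer([src], return_tensors="pt")
out = model.generate(**batch)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```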
### System Info:
- hf_name: heb-ara
- source_languages: heb
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'ar']
- src_constituents: {'heb'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ara/opus-2020-07-03.test.txt
- src_alpha3: heb
- tgt_alpha3: ara
- short_pair: he-ar
- chrF2_score: 0.532
- bleu: 23.6
- brevity_penalty: 0.9259999999999999
- ref_len: 6372.0
- src_name: Hebrew
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: he
- tgt_alpha2: ar
- prefer_old: False
- long_pair: heb-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["he", "ar"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-he-ar | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"he",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"he",
"ar"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #he #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### heb-ara
* source group: Hebrew
* target group: Arabic
* OPUS readme: heb-ara
* model: transformer
* source language(s): heb
* target language(s): apc apc\_Latn ara arq arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.6, chr-F: 0.532
### System Info:
* hf\_name: heb-ara
* source\_languages: heb
* target\_languages: ara
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['he', 'ar']
* src\_constituents: {'heb'}
* tgt\_constituents: {'apc', 'ara', 'arq\_Latn', 'arq', 'afb', 'ara\_Latn', 'apc\_Latn', 'arz'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: heb
* tgt\_alpha3: ara
* short\_pair: he-ar
* chrF2\_score: 0.532
* bleu: 23.6
* brevity\_penalty: 0.9259999999999999
* ref\_len: 6372.0
* src\_name: Hebrew
* tgt\_name: Arabic
* train\_date: 2020-07-03
* src\_alpha2: he
* tgt\_alpha2: ar
* prefer\_old: False
* long\_pair: heb-ara
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### heb-ara\n\n\n* source group: Hebrew\n* target group: Arabic\n* OPUS readme: heb-ara\n* model: transformer\n* source language(s): heb\n* target language(s): apc apc\\_Latn ara arq arz\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.6, chr-F: 0.532",
"### System Info:\n\n\n* hf\\_name: heb-ara\n* source\\_languages: heb\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'ar']\n* src\\_constituents: {'heb'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: ara\n* short\\_pair: he-ar\n* chrF2\\_score: 0.532\n* bleu: 23.6\n* brevity\\_penalty: 0.9259999999999999\n* ref\\_len: 6372.0\n* src\\_name: Hebrew\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: he\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: heb-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### heb-ara\n\n\n* source group: Hebrew\n* target group: Arabic\n* OPUS readme: heb-ara\n* model: transformer\n* source language(s): heb\n* target language(s): apc apc\\_Latn ara arq arz\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.6, chr-F: 0.532",
"### System Info:\n\n\n* hf\\_name: heb-ara\n* source\\_languages: heb\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'ar']\n* src\\_constituents: {'heb'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: ara\n* short\\_pair: he-ar\n* chrF2\\_score: 0.532\n* bleu: 23.6\n* brevity\\_penalty: 0.9259999999999999\n* ref\\_len: 6372.0\n* src\\_name: Hebrew\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: he\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: heb-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
170,
457
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### heb-ara\n\n\n* source group: Hebrew\n* target group: Arabic\n* OPUS readme: heb-ara\n* model: transformer\n* source language(s): heb\n* target language(s): apc apc\\_Latn ara arq arz\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.6, chr-F: 0.532### System Info:\n\n\n* hf\\_name: heb-ara\n* source\\_languages: heb\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'ar']\n* src\\_constituents: {'heb'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: ara\n* short\\_pair: he-ar\n* chrF2\\_score: 0.532\n* bleu: 23.6\n* brevity\\_penalty: 0.9259999999999999\n* ref\\_len: 6372.0\n* src\\_name: Hebrew\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: he\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: heb-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-he-de
* source languages: he
* target languages: de
* OPUS readme: [he-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/he-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/he-de/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-de/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-de/opus-2020-01-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.he.de | 45.5 | 0.647 |
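The checkpoint can also be driven through the high-level `pipeline` API; a minimal sketch (the model id comes from this card, the input sentence is a placeholder):

```python
from transformers import pipeline

# The "translation" task wraps tokenization, generation and decoding.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-he-de")
print(translator("זה משפט לדוגמה.", max_length=128))
```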
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-he-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"he",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #he #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-he-de
* source languages: he
* target languages: de
* OPUS readme: he-de
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 45.5, chr-F: 0.647
| [
"### opus-mt-he-de\n\n\n* source languages: he\n* target languages: de\n* OPUS readme: he-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.5, chr-F: 0.647"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-he-de\n\n\n* source languages: he\n* target languages: de\n* OPUS readme: he-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.5, chr-F: 0.647"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-he-de\n\n\n* source languages: he\n* target languages: de\n* OPUS readme: he-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.5, chr-F: 0.647"
] |
translation | transformers |
### heb-epo
* source group: Hebrew
* target group: Esperanto
* OPUS readme: [heb-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-epo/README.md)
* model: transformer-align
* source language(s): heb
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.epo | 17.6 | 0.348 |
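For more than one sentence, padded batching keeps translation to a single `generate` call; a minimal sketch (the model id comes from this card, the Hebrew inputs are placeholders):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-he-eo"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentences = ["שלום", "מה שלומך?"]  # placeholder Hebrew inputs
batch = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
outputs = model.generate(**batch, num_beams=4)
for translation in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(translation)
```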
### System Info:
- hf_name: heb-epo
- source_languages: heb
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'eo']
- src_constituents: {'heb'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-epo/opus-2020-06-16.test.txt
- src_alpha3: heb
- tgt_alpha3: epo
- short_pair: he-eo
- chrF2_score: 0.348
- bleu: 17.6
- brevity_penalty: 0.899
- ref_len: 78217.0
- src_name: Hebrew
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: he
- tgt_alpha2: eo
- prefer_old: False
- long_pair: heb-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["he", "eo"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-he-eo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"he",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"he",
"eo"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #he #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### heb-epo
* source group: Hebrew
* target group: Esperanto
* OPUS readme: heb-epo
* model: transformer-align
* source language(s): heb
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 17.6, chr-F: 0.348
### System Info:
* hf\_name: heb-epo
* source\_languages: heb
* target\_languages: epo
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['he', 'eo']
* src\_constituents: {'heb'}
* tgt\_constituents: {'epo'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: heb
* tgt\_alpha3: epo
* short\_pair: he-eo
* chrF2\_score: 0.348
* bleu: 17.6
* brevity\_penalty: 0.899
* ref\_len: 78217.0
* src\_name: Hebrew
* tgt\_name: Esperanto
* train\_date: 2020-06-16
* src\_alpha2: he
* tgt\_alpha2: eo
* prefer\_old: False
* long\_pair: heb-epo
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### heb-epo\n\n\n* source group: Hebrew\n* target group: Esperanto\n* OPUS readme: heb-epo\n* model: transformer-align\n* source language(s): heb\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.6, chr-F: 0.348",
"### System Info:\n\n\n* hf\\_name: heb-epo\n* source\\_languages: heb\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'eo']\n* src\\_constituents: {'heb'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: epo\n* short\\_pair: he-eo\n* chrF2\\_score: 0.348\n* bleu: 17.6\n* brevity\\_penalty: 0.899\n* ref\\_len: 78217.0\n* src\\_name: Hebrew\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: he\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: heb-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### heb-epo\n\n\n* source group: Hebrew\n* target group: Esperanto\n* OPUS readme: heb-epo\n* model: transformer-align\n* source language(s): heb\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.6, chr-F: 0.348",
"### System Info:\n\n\n* hf\\_name: heb-epo\n* source\\_languages: heb\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'eo']\n* src\\_constituents: {'heb'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: epo\n* short\\_pair: he-eo\n* chrF2\\_score: 0.348\n* bleu: 17.6\n* brevity\\_penalty: 0.899\n* ref\\_len: 78217.0\n* src\\_name: Hebrew\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: he\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: heb-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
139,
407
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### heb-epo\n\n\n* source group: Hebrew\n* target group: Esperanto\n* OPUS readme: heb-epo\n* model: transformer-align\n* source language(s): heb\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.6, chr-F: 0.348### System Info:\n\n\n* hf\\_name: heb-epo\n* source\\_languages: heb\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'eo']\n* src\\_constituents: {'heb'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: epo\n* short\\_pair: he-eo\n* chrF2\\_score: 0.348\n* bleu: 17.6\n* brevity\\_penalty: 0.899\n* ref\\_len: 78217.0\n* src\\_name: Hebrew\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: he\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: heb-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### he-es
* source group: Hebrew
* target group: Spanish
* OPUS readme: [heb-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-spa/README.md)
* model: transformer
* source language(s): heb
* target language(s): spa
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.spa | 51.3 | 0.689 |
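For faster inference the model can be placed on a GPU and run without gradient tracking; a minimal sketch (the model id comes from this card, the input is a placeholder):

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-he-es"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name).to(device)

batch = tokenizer(["שלום עולם"], return_tensors="pt").to(device)
with torch.no_grad():  # inference only, no gradients needed
    out = model.generate(**batch, num_beams=6, max_new_tokens=128)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```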
### System Info:
- hf_name: he-es
- source_languages: heb
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'es']
- src_constituents: ('Hebrew', {'heb'})
- tgt_constituents: ('Spanish', {'spa'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: heb-spa
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-spa/opus-2020-12-10.test.txt
- src_alpha3: heb
- tgt_alpha3: spa
- chrF2_score: 0.6890000000000001
- bleu: 51.3
- brevity_penalty: 0.97
- ref_len: 14213.0
- src_name: Hebrew
- tgt_name: Spanish
- train_date: 2020-12-10 00:00:00
- src_alpha2: he
- tgt_alpha2: es
- prefer_old: False
- short_pair: he-es
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-09:15 | {"language": ["he", "es"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-he-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"he",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"he",
"es"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #he #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### he-es
* source group: Hebrew
* target group: Spanish
* OPUS readme: heb-spa
* model: transformer
* source language(s): heb
* target language(s): spa
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 51.3, chr-F: 0.689
### System Info:
* hf\_name: he-es
* source\_languages: heb
* target\_languages: spa
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['he', 'es']
* src\_constituents: ('Hebrew', {'heb'})
* tgt\_constituents: ('Spanish', {'spa'})
* src\_multilingual: False
* tgt\_multilingual: False
* long\_pair: heb-spa
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: heb
* tgt\_alpha3: spa
* chrF2\_score: 0.6890000000000001
* bleu: 51.3
* brevity\_penalty: 0.97
* ref\_len: 14213.0
* src\_name: Hebrew
* tgt\_name: Spanish
* train\_date: 2020-12-10 00:00:00
* src\_alpha2: he
* tgt\_alpha2: es
* prefer\_old: False
* short\_pair: he-es
* helsinki\_git\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
* transformers\_git\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
* port\_machine: URL
* port\_time: 2020-12-11-09:15
| [
"### he-es\n\n\n* source group: Hebrew\n* target group: Spanish\n* OPUS readme: heb-spa\n* model: transformer\n* source language(s): heb\n* target language(s): spa\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.3, chr-F: 0.689",
"### System Info:\n\n\n* hf\\_name: he-es\n* source\\_languages: heb\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'es']\n* src\\_constituents: ('Hebrew', {'heb'})\n* tgt\\_constituents: ('Spanish', {'spa'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: heb-spa\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: spa\n* chrF2\\_score: 0.6890000000000001\n* bleu: 51.3\n* brevity\\_penalty: 0.97\n* ref\\_len: 14213.0\n* src\\_name: Hebrew\n* tgt\\_name: Spanish\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: he\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* short\\_pair: he-es\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-09:15"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### he-es\n\n\n* source group: Hebrew\n* target group: Spanish\n* OPUS readme: heb-spa\n* model: transformer\n* source language(s): heb\n* target language(s): spa\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.3, chr-F: 0.689",
"### System Info:\n\n\n* hf\\_name: he-es\n* source\\_languages: heb\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'es']\n* src\\_constituents: ('Hebrew', {'heb'})\n* tgt\\_constituents: ('Spanish', {'spa'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: heb-spa\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: spa\n* chrF2\\_score: 0.6890000000000001\n* bleu: 51.3\n* brevity\\_penalty: 0.97\n* ref\\_len: 14213.0\n* src\\_name: Hebrew\n* tgt\\_name: Spanish\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: he\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* short\\_pair: he-es\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-09:15"
] | [
51,
129,
419
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### he-es\n\n\n* source group: Hebrew\n* target group: Spanish\n* OPUS readme: heb-spa\n* model: transformer\n* source language(s): heb\n* target language(s): spa\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.3, chr-F: 0.689### System Info:\n\n\n* hf\\_name: he-es\n* source\\_languages: heb\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'es']\n* src\\_constituents: ('Hebrew', {'heb'})\n* tgt\\_constituents: ('Spanish', {'spa'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: heb-spa\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: spa\n* chrF2\\_score: 0.6890000000000001\n* bleu: 51.3\n* brevity\\_penalty: 0.97\n* ref\\_len: 14213.0\n* src\\_name: Hebrew\n* tgt\\_name: Spanish\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: he\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* short\\_pair: he-es\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-09:15"
] |
translation | transformers |
### opus-mt-he-fi
* source languages: he
* target languages: fi
* OPUS readme: [he-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/he-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/he-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.he.fi | 23.3 | 0.492 |
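Scores like the BLEU and chr-F values above can be recomputed from system outputs and references with `sacrebleu`; a minimal sketch (the hypothesis and reference strings are placeholders; sacrebleu reports chrF on a 0-100 scale, so it is divided by 100 to match the card's convention):

```python
import sacrebleu

hyps = ["Hyvää huomenta."]    # system translations (placeholder)
refs = [["Hyvää huomenta."]]  # one list per reference set (placeholder)

bleu = sacrebleu.corpus_bleu(hyps, refs)
chrf = sacrebleu.corpus_chrf(hyps, refs)
print(f"BLEU = {bleu.score:.1f}, chr-F = {chrf.score / 100:.3f}")
```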
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-he-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"he",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #he #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-he-fi
* source languages: he
* target languages: fi
* OPUS readme: he-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.3, chr-F: 0.492
| [
"### opus-mt-he-fi\n\n\n* source languages: he\n* target languages: fi\n* OPUS readme: he-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.492"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-he-fi\n\n\n* source languages: he\n* target languages: fi\n* OPUS readme: he-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.492"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-he-fi\n\n\n* source languages: he\n* target languages: fi\n* OPUS readme: he-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.492"
] |
translation | transformers |
### he-it
* source group: Hebrew
* target group: Italian
* OPUS readme: [heb-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ita/README.md)
* model: transformer
* source language(s): heb
* target language(s): ita
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.ita | 41.1 | 0.643 |
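Since pre-processing uses a 32k SentencePiece model, the tokenizer exposes the subword segmentation directly; a minimal sketch (the model id comes from this card, the input is a placeholder):

```python
from transformers import MarianTokenizer

tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-he-it")
pieces = tok.tokenize("שלום עולם")        # SentencePiece pieces from the spm32k model
print(pieces)
print(tok.convert_tokens_to_ids(pieces))  # vocabulary ids fed to the encoder
```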
### System Info:
- hf_name: he-it
- source_languages: heb
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'it']
- src_constituents: ('Hebrew', {'heb'})
- tgt_constituents: ('Italian', {'ita'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: heb-ita
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.test.txt
- src_alpha3: heb
- tgt_alpha3: ita
- chrF2_score: 0.643
- bleu: 41.1
- brevity_penalty: 0.997
- ref_len: 11464.0
- src_name: Hebrew
- tgt_name: Italian
- train_date: 2020-12-10 00:00:00
- src_alpha2: he
- tgt_alpha2: it
- prefer_old: False
- short_pair: he-it
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-11:50 | {"language": ["he", "it"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-he-it | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"he",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"he",
"it"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #he #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### he-it
* source group: Hebrew
* target group: Italian
* OPUS readme: heb-ita
* model: transformer
* source language(s): heb
* target language(s): ita
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 41.1, chr-F: 0.643
### System Info:
* hf\_name: he-it
* source\_languages: heb
* target\_languages: ita
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['he', 'it']
* src\_constituents: ('Hebrew', {'heb'})
* tgt\_constituents: ('Italian', {'ita'})
* src\_multilingual: False
* tgt\_multilingual: False
* long\_pair: heb-ita
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: heb
* tgt\_alpha3: ita
* chrF2\_score: 0.643
* bleu: 41.1
* brevity\_penalty: 0.997
* ref\_len: 11464.0
* src\_name: Hebrew
* tgt\_name: Italian
* train\_date: 2020-12-10 00:00:00
* src\_alpha2: he
* tgt\_alpha2: it
* prefer\_old: False
* short\_pair: he-it
* helsinki\_git\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
* transformers\_git\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
* port\_machine: URL
* port\_time: 2020-12-11-11:50
| [
"### he-it\n\n\n* source group: Hebrew\n* target group: Italian\n* OPUS readme: heb-ita\n* model: transformer\n* source language(s): heb\n* target language(s): ita\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.1, chr-F: 0.643",
"### System Info:\n\n\n* hf\\_name: he-it\n* source\\_languages: heb\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'it']\n* src\\_constituents: ('Hebrew', {'heb'})\n* tgt\\_constituents: ('Italian', {'ita'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: heb-ita\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: ita\n* chrF2\\_score: 0.643\n* bleu: 41.1\n* brevity\\_penalty: 0.997\n* ref\\_len: 11464.0\n* src\\_name: Hebrew\n* tgt\\_name: Italian\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: he\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* short\\_pair: he-it\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-11:50"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### he-it\n\n\n* source group: Hebrew\n* target group: Italian\n* OPUS readme: heb-ita\n* model: transformer\n* source language(s): heb\n* target language(s): ita\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.1, chr-F: 0.643",
"### System Info:\n\n\n* hf\\_name: he-it\n* source\\_languages: heb\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'it']\n* src\\_constituents: ('Hebrew', {'heb'})\n* tgt\\_constituents: ('Italian', {'ita'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: heb-ita\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: ita\n* chrF2\\_score: 0.643\n* bleu: 41.1\n* brevity\\_penalty: 0.997\n* ref\\_len: 11464.0\n* src\\_name: Hebrew\n* tgt\\_name: Italian\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: he\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* short\\_pair: he-it\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-11:50"
] | [
51,
131,
418
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### he-it\n\n\n* source group: Hebrew\n* target group: Italian\n* OPUS readme: heb-ita\n* model: transformer\n* source language(s): heb\n* target language(s): ita\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.1, chr-F: 0.643### System Info:\n\n\n* hf\\_name: he-it\n* source\\_languages: heb\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'it']\n* src\\_constituents: ('Hebrew', {'heb'})\n* tgt\\_constituents: ('Italian', {'ita'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: heb-ita\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: ita\n* chrF2\\_score: 0.643\n* bleu: 41.1\n* brevity\\_penalty: 0.997\n* ref\\_len: 11464.0\n* src\\_name: Hebrew\n* tgt\\_name: Italian\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: he\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* short\\_pair: he-it\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-11:50"
] |
translation | transformers |
### he-ru
* source group: Hebrew
* target group: Russian
* OPUS readme: [heb-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-rus/README.md)
* model: transformer
* source language(s): heb
* target language(s): rus
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-10-04.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-rus/opus-2020-10-04.zip)
* test set translations: [opus-2020-10-04.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-rus/opus-2020-10-04.test.txt)
* test set scores: [opus-2020-10-04.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-rus/opus-2020-10-04.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.rus | 40.5 | 0.599 |
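The `tf` tag on this record indicates a TensorFlow port of the weights is available as well; a minimal sketch (the model id comes from this card, the input is a placeholder):

```python
from transformers import MarianTokenizer, TFMarianMTModel

model_name = "Helsinki-NLP/opus-mt-he-ru"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = TFMarianMTModel.from_pretrained(model_name)  # TensorFlow weights

batch = tokenizer(["שלום עולם"], return_tensors="tf")
out = model.generate(**batch)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```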
### System Info:
- hf_name: he-ru
- source_languages: heb
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'ru']
- src_constituents: ('Hebrew', {'heb'})
- tgt_constituents: ('Russian', {'rus'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: heb-rus
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-rus/opus-2020-10-04.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-rus/opus-2020-10-04.test.txt
- src_alpha3: heb
- tgt_alpha3: rus
- chrF2_score: 0.599
- bleu: 40.5
- brevity_penalty: 0.963
- ref_len: 16583.0
- src_name: Hebrew
- tgt_name: Russian
- train_date: 2020-10-04 00:00:00
- src_alpha2: he
- tgt_alpha2: ru
- prefer_old: False
- short_pair: he-ru
- helsinki_git_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561
- transformers_git_sha: b0a907615aca0d728a9bc90f16caef0848f6a435
- port_machine: LM0-400-22516.local
- port_time: 2020-10-26-16:16 | {"language": ["he", "ru"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-he-ru | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"he",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"he",
"ru"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #he #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### he-ru
* source group: Hebrew
* target group: Russian
* OPUS readme: heb-rus
* model: transformer
* source language(s): heb
* target language(s): rus
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 40.5, chr-F: 0.599
### System Info:
* hf\_name: he-ru
* source\_languages: heb
* target\_languages: rus
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['he', 'ru']
* src\_constituents: ('Hebrew', {'heb'})
* tgt\_constituents: ('Russian', {'rus'})
* src\_multilingual: False
* tgt\_multilingual: False
* long\_pair: heb-rus
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: heb
* tgt\_alpha3: rus
* chrF2\_score: 0.599
* bleu: 40.5
* brevity\_penalty: 0.963
* ref\_len: 16583.0
* src\_name: Hebrew
* tgt\_name: Russian
* train\_date: 2020-10-04 00:00:00
* src\_alpha2: he
* tgt\_alpha2: ru
* prefer\_old: False
* short\_pair: he-ru
* helsinki\_git\_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561
* transformers\_git\_sha: b0a907615aca0d728a9bc90f16caef0848f6a435
* port\_machine: URL
* port\_time: 2020-10-26-16:16
| [
"### he-ru\n\n\n* source group: Hebrew\n* target group: Russian\n* OPUS readme: heb-rus\n* model: transformer\n* source language(s): heb\n* target language(s): rus\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.5, chr-F: 0.599",
"### System Info:\n\n\n* hf\\_name: he-ru\n* source\\_languages: heb\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'ru']\n* src\\_constituents: ('Hebrew', {'heb'})\n* tgt\\_constituents: ('Russian', {'rus'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: heb-rus\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: rus\n* chrF2\\_score: 0.599\n* bleu: 40.5\n* brevity\\_penalty: 0.963\n* ref\\_len: 16583.0\n* src\\_name: Hebrew\n* tgt\\_name: Russian\n* train\\_date: 2020-10-04 00:00:00\n* src\\_alpha2: he\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* short\\_pair: he-ru\n* helsinki\\_git\\_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561\n* transformers\\_git\\_sha: b0a907615aca0d728a9bc90f16caef0848f6a435\n* port\\_machine: URL\n* port\\_time: 2020-10-26-16:16"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### he-ru\n\n\n* source group: Hebrew\n* target group: Russian\n* OPUS readme: heb-rus\n* model: transformer\n* source language(s): heb\n* target language(s): rus\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.5, chr-F: 0.599",
"### System Info:\n\n\n* hf\\_name: he-ru\n* source\\_languages: heb\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'ru']\n* src\\_constituents: ('Hebrew', {'heb'})\n* tgt\\_constituents: ('Russian', {'rus'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: heb-rus\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: rus\n* chrF2\\_score: 0.599\n* bleu: 40.5\n* brevity\\_penalty: 0.963\n* ref\\_len: 16583.0\n* src\\_name: Hebrew\n* tgt\\_name: Russian\n* train\\_date: 2020-10-04 00:00:00\n* src\\_alpha2: he\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* short\\_pair: he-ru\n* helsinki\\_git\\_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561\n* transformers\\_git\\_sha: b0a907615aca0d728a9bc90f16caef0848f6a435\n* port\\_machine: URL\n* port\\_time: 2020-10-26-16:16"
] | [
51,
129,
412
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### he-ru\n\n\n* source group: Hebrew\n* target group: Russian\n* OPUS readme: heb-rus\n* model: transformer\n* source language(s): heb\n* target language(s): rus\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.5, chr-F: 0.599### System Info:\n\n\n* hf\\_name: he-ru\n* source\\_languages: heb\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'ru']\n* src\\_constituents: ('Hebrew', {'heb'})\n* tgt\\_constituents: ('Russian', {'rus'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: heb-rus\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: rus\n* chrF2\\_score: 0.599\n* bleu: 40.5\n* brevity\\_penalty: 0.963\n* ref\\_len: 16583.0\n* src\\_name: Hebrew\n* tgt\\_name: Russian\n* train\\_date: 2020-10-04 00:00:00\n* src\\_alpha2: he\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* short\\_pair: he-ru\n* helsinki\\_git\\_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561\n* transformers\\_git\\_sha: b0a907615aca0d728a9bc90f16caef0848f6a435\n* port\\_machine: URL\n* port\\_time: 2020-10-26-16:16"
] |
translation | transformers |
### opus-mt-he-sv
* source languages: he
* target languages: sv
* OPUS readme: [he-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/he-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/he-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/he-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.he.sv | 28.9 | 0.493 |
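The original Marian weights can be fetched directly from the link above; a minimal sketch using only the standard library (the URL is copied verbatim from this card):

```python
import urllib.request

# Original Marian-NMT weights referenced on the card.
url = "https://object.pouta.csc.fi/OPUS-MT-models/he-sv/opus-2020-01-09.zip"
urllib.request.urlretrieve(url, "opus-2020-01-09.zip")
```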
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-he-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"he",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #he #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-he-sv
* source languages: he
* target languages: sv
* OPUS readme: he-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 28.9, chr-F: 0.493
| [
"### opus-mt-he-sv\n\n\n* source languages: he\n* target languages: sv\n* OPUS readme: he-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.9, chr-F: 0.493"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-he-sv\n\n\n* source languages: he\n* target languages: sv\n* OPUS readme: he-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.9, chr-F: 0.493"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-he-sv\n\n\n* source languages: he\n* target languages: sv\n* OPUS readme: he-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.9, chr-F: 0.493"
] |
translation | transformers |
### heb-ukr
* source group: Hebrew
* target group: Ukrainian
* OPUS readme: [heb-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ukr/README.md)
* model: transformer-align
* source language(s): heb
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.ukr | 35.4 | 0.552 |
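The test set translations linked above are plain text and can be inspected directly; a minimal sketch (the URL is copied verbatim from this card; the file's exact column layout is not specified here, so the lines are only printed):

```python
import urllib.request

url = "https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ukr/opus-2020-06-17.test.txt"
with urllib.request.urlopen(url) as f:
    lines = f.read().decode("utf-8").splitlines()
print(lines[:4])  # peek at the first few lines
```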
### System Info:
- hf_name: heb-ukr
- source_languages: heb
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'uk']
- src_constituents: {'heb'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ukr/opus-2020-06-17.test.txt
- src_alpha3: heb
- tgt_alpha3: ukr
- short_pair: he-uk
- chrF2_score: 0.552
- bleu: 35.4
- brevity_penalty: 0.971
- ref_len: 5163.0
- src_name: Hebrew
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: he
- tgt_alpha2: uk
- prefer_old: False
- long_pair: heb-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["he", "uk"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-he-uk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"he",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"he",
"uk"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #he #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### heb-ukr
* source group: Hebrew
* target group: Ukrainian
* OPUS readme: heb-ukr
* model: transformer-align
* source language(s): heb
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 35.4, chr-F: 0.552
### System Info:
* hf\_name: heb-ukr
* source\_languages: heb
* target\_languages: ukr
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['he', 'uk']
* src\_constituents: {'heb'}
* tgt\_constituents: {'ukr'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: heb
* tgt\_alpha3: ukr
* short\_pair: he-uk
* chrF2\_score: 0.552
* bleu: 35.4
* brevity\_penalty: 0.971
* ref\_len: 5163.0
* src\_name: Hebrew
* tgt\_name: Ukrainian
* train\_date: 2020-06-17
* src\_alpha2: he
* tgt\_alpha2: uk
* prefer\_old: False
* long\_pair: heb-ukr
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### heb-ukr\n\n\n* source group: Hebrew\n* target group: Ukrainian\n* OPUS readme: heb-ukr\n* model: transformer-align\n* source language(s): heb\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.4, chr-F: 0.552",
"### System Info:\n\n\n* hf\\_name: heb-ukr\n* source\\_languages: heb\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'uk']\n* src\\_constituents: {'heb'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: ukr\n* short\\_pair: he-uk\n* chrF2\\_score: 0.552\n* bleu: 35.4\n* brevity\\_penalty: 0.971\n* ref\\_len: 5163.0\n* src\\_name: Hebrew\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: he\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: heb-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### heb-ukr\n\n\n* source group: Hebrew\n* target group: Ukrainian\n* OPUS readme: heb-ukr\n* model: transformer-align\n* source language(s): heb\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.4, chr-F: 0.552",
"### System Info:\n\n\n* hf\\_name: heb-ukr\n* source\\_languages: heb\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'uk']\n* src\\_constituents: {'heb'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: ukr\n* short\\_pair: he-uk\n* chrF2\\_score: 0.552\n* bleu: 35.4\n* brevity\\_penalty: 0.971\n* ref\\_len: 5163.0\n* src\\_name: Hebrew\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: he\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: heb-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
137,
402
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### heb-ukr\n\n\n* source group: Hebrew\n* target group: Ukrainian\n* OPUS readme: heb-ukr\n* model: transformer-align\n* source language(s): heb\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.4, chr-F: 0.552### System Info:\n\n\n* hf\\_name: heb-ukr\n* source\\_languages: heb\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'uk']\n* src\\_constituents: {'heb'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: ukr\n* short\\_pair: he-uk\n* chrF2\\_score: 0.552\n* bleu: 35.4\n* brevity\\_penalty: 0.971\n* ref\\_len: 5163.0\n* src\\_name: Hebrew\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: he\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: heb-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-hi-en
* source languages: hi
* target languages: en
* OPUS readme: [hi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/hi-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hi-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hi-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014.hi.en | 9.1 | 0.357 |
| newstest2014-hien.hi.en | 13.6 | 0.409 |
| Tatoeba.hi.en | 40.4 | 0.580 |
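The converted checkpoint can be loaded through the Marian classes in transformers; the SentencePiece pre-processing noted above is handled by the tokenizer. A minimal sketch (the input sentence is illustrative, not from the test sets):

```python
# Minimal usage sketch for this checkpoint; the Hub id matches this card,
# but the example sentence and its output are illustrative only.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-hi-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)  # SentencePiece under the hood
model = MarianMTModel.from_pretrained(model_name)

# Tokenize a Hindi sentence, generate the translation, decode to English.
batch = tokenizer(["आप कैसे हैं?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```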
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hi-en | null | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"hi",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #rust #marian #text2text-generation #translation #hi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-hi-en
* source languages: hi
* target languages: en
* OPUS readme: hi-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 9.1, chr-F: 0.357
testset: URL, BLEU: 13.6, chr-F: 0.409
testset: URL, BLEU: 40.4, chr-F: 0.580
| [
"### opus-mt-hi-en\n\n\n* source languages: hi\n* target languages: en\n* OPUS readme: hi-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 9.1, chr-F: 0.357\ntestset: URL, BLEU: 13.6, chr-F: 0.409\ntestset: URL, BLEU: 40.4, chr-F: 0.580"
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #hi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-hi-en\n\n\n* source languages: hi\n* target languages: en\n* OPUS readme: hi-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 9.1, chr-F: 0.357\ntestset: URL, BLEU: 13.6, chr-F: 0.409\ntestset: URL, BLEU: 40.4, chr-F: 0.580"
] | [
53,
150
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #hi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-hi-en\n\n\n* source languages: hi\n* target languages: en\n* OPUS readme: hi-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 9.1, chr-F: 0.357\ntestset: URL, BLEU: 13.6, chr-F: 0.409\ntestset: URL, BLEU: 40.4, chr-F: 0.580"
] |
translation | transformers |
### hin-urd
* source group: Hindi
* target group: Urdu
* OPUS readme: [hin-urd](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hin-urd/README.md)
* model: transformer-align
* source language(s): hin
* target language(s): urd
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.hin.urd | 12.4 | 0.393 |
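Scores like those in the table (and the chrF2_score/bleu fields below) can be reproduced with sacrebleu; a sketch, assuming hypothesis and reference files with one segment per line (the file names here are hypothetical):

```python
# Sketch of metric reproduction with sacrebleu; hyp.txt / ref.txt are
# hypothetical stand-ins for the decoded test set and its references.
import sacrebleu

with open("hyp.txt", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("ref.txt", encoding="utf-8") as f:
    references = [[line.strip() for line in f]]  # a single reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
# sacrebleu reports chrF on a 0-100 scale; the card uses 0-1.
print(f"BLEU: {bleu.score:.1f}  chr-F: {chrf.score / 100:.3f}")
```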
### System Info:
- hf_name: hin-urd
- source_languages: hin
- target_languages: urd
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hin-urd/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['hi', 'ur']
- src_constituents: {'hin'}
- tgt_constituents: {'urd'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/hin-urd/opus-2020-06-16.test.txt
- src_alpha3: hin
- tgt_alpha3: urd
- short_pair: hi-ur
- chrF2_score: 0.393
- bleu: 12.4
- brevity_penalty: 1.0
- ref_len: 1618.0
- src_name: Hindi
- tgt_name: Urdu
- train_date: 2020-06-16
- src_alpha2: hi
- tgt_alpha2: ur
- prefer_old: False
- long_pair: hin-urd
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["hi", "ur"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hi-ur | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hi",
"ur",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hi",
"ur"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hi #ur #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### hin-urd
* source group: Hindi
* target group: Urdu
* OPUS readme: hin-urd
* model: transformer-align
* source language(s): hin
* target language(s): urd
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 12.4, chr-F: 0.393
### System Info:
* hf\_name: hin-urd
* source\_languages: hin
* target\_languages: urd
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['hi', 'ur']
* src\_constituents: {'hin'}
* tgt\_constituents: {'urd'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: hin
* tgt\_alpha3: urd
* short\_pair: hi-ur
* chrF2\_score: 0.393
* bleu: 12.4
* brevity\_penalty: 1.0
* ref\_len: 1618.0
* src\_name: Hindi
* tgt\_name: Urdu
* train\_date: 2020-06-16
* src\_alpha2: hi
* tgt\_alpha2: ur
* prefer\_old: False
* long\_pair: hin-urd
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### hin-urd\n\n\n* source group: Hindi\n* target group: Urdu\n* OPUS readme: hin-urd\n* model: transformer-align\n* source language(s): hin\n* target language(s): urd\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.4, chr-F: 0.393",
"### System Info:\n\n\n* hf\\_name: hin-urd\n* source\\_languages: hin\n* target\\_languages: urd\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hi', 'ur']\n* src\\_constituents: {'hin'}\n* tgt\\_constituents: {'urd'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hin\n* tgt\\_alpha3: urd\n* short\\_pair: hi-ur\n* chrF2\\_score: 0.39299999999999996\n* bleu: 12.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 1618.0\n* src\\_name: Hindi\n* tgt\\_name: Urdu\n* train\\_date: 2020-06-16\n* src\\_alpha2: hi\n* tgt\\_alpha2: ur\n* prefer\\_old: False\n* long\\_pair: hin-urd\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hi #ur #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### hin-urd\n\n\n* source group: Hindi\n* target group: Urdu\n* OPUS readme: hin-urd\n* model: transformer-align\n* source language(s): hin\n* target language(s): urd\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.4, chr-F: 0.393",
"### System Info:\n\n\n* hf\\_name: hin-urd\n* source\\_languages: hin\n* target\\_languages: urd\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hi', 'ur']\n* src\\_constituents: {'hin'}\n* tgt\\_constituents: {'urd'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hin\n* tgt\\_alpha3: urd\n* short\\_pair: hi-ur\n* chrF2\\_score: 0.39299999999999996\n* bleu: 12.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 1618.0\n* src\\_name: Hindi\n* tgt\\_name: Urdu\n* train\\_date: 2020-06-16\n* src\\_alpha2: hi\n* tgt\\_alpha2: ur\n* prefer\\_old: False\n* long\\_pair: hin-urd\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
137,
412
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hi #ur #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### hin-urd\n\n\n* source group: Hindi\n* target group: Urdu\n* OPUS readme: hin-urd\n* model: transformer-align\n* source language(s): hin\n* target language(s): urd\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.4, chr-F: 0.393### System Info:\n\n\n* hf\\_name: hin-urd\n* source\\_languages: hin\n* target\\_languages: urd\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hi', 'ur']\n* src\\_constituents: {'hin'}\n* tgt\\_constituents: {'urd'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hin\n* tgt\\_alpha3: urd\n* short\\_pair: hi-ur\n* chrF2\\_score: 0.39299999999999996\n* bleu: 12.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 1618.0\n* src\\_name: Hindi\n* tgt\\_name: Urdu\n* train\\_date: 2020-06-16\n* src\\_alpha2: hi\n* tgt\\_alpha2: ur\n* prefer\\_old: False\n* long\\_pair: hin-urd\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-hil-de
* source languages: hil
* target languages: de
* OPUS readme: [hil-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hil-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/hil-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hil.de | 26.4 | 0.479 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hil-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hil",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hil #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-hil-de
* source languages: hil
* target languages: de
* OPUS readme: hil-de
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.4, chr-F: 0.479
| [
"### opus-mt-hil-de\n\n\n* source languages: hil\n* target languages: de\n* OPUS readme: hil-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.4, chr-F: 0.479"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hil #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-hil-de\n\n\n* source languages: hil\n* target languages: de\n* OPUS readme: hil-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.4, chr-F: 0.479"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hil #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-hil-de\n\n\n* source languages: hil\n* target languages: de\n* OPUS readme: hil-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.4, chr-F: 0.479"
] |
translation | transformers |
### opus-mt-hil-en
* source languages: hil
* target languages: en
* OPUS readme: [hil-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hil-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/hil-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hil.en | 49.2 | 0.638 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hil-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hil",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hil #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-hil-en
* source languages: hil
* target languages: en
* OPUS readme: hil-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 49.2, chr-F: 0.638
| [
"### opus-mt-hil-en\n\n\n* source languages: hil\n* target languages: en\n* OPUS readme: hil-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.2, chr-F: 0.638"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hil #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-hil-en\n\n\n* source languages: hil\n* target languages: en\n* OPUS readme: hil-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.2, chr-F: 0.638"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hil #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-hil-en\n\n\n* source languages: hil\n* target languages: en\n* OPUS readme: hil-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.2, chr-F: 0.638"
] |
translation | transformers |
### opus-mt-hil-fi
* source languages: hil
* target languages: fi
* OPUS readme: [hil-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hil-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/hil-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hil-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hil.fi | 29.9 | 0.547 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hil-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hil",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hil #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-hil-fi
* source languages: hil
* target languages: fi
* OPUS readme: hil-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 29.9, chr-F: 0.547
| [
"### opus-mt-hil-fi\n\n\n* source languages: hil\n* target languages: fi\n* OPUS readme: hil-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.9, chr-F: 0.547"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hil #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-hil-fi\n\n\n* source languages: hil\n* target languages: fi\n* OPUS readme: hil-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.9, chr-F: 0.547"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hil #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-hil-fi\n\n\n* source languages: hil\n* target languages: fi\n* OPUS readme: hil-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.9, chr-F: 0.547"
] |
translation | transformers |
### opus-mt-ho-en
* source languages: ho
* target languages: en
* OPUS readme: [ho-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ho-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ho-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ho-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ho-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ho.en | 26.8 | 0.428 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ho-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ho",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ho #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ho-en
* source languages: ho
* target languages: en
* OPUS readme: ho-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.8, chr-F: 0.428
| [
"### opus-mt-ho-en\n\n\n* source languages: ho\n* target languages: en\n* OPUS readme: ho-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.8, chr-F: 0.428"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ho #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ho-en\n\n\n* source languages: ho\n* target languages: en\n* OPUS readme: ho-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.8, chr-F: 0.428"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ho #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ho-en\n\n\n* source languages: ho\n* target languages: en\n* OPUS readme: ho-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.8, chr-F: 0.428"
] |
translation | transformers |
### opus-mt-hr-es
* source languages: hr
* target languages: es
* OPUS readme: [hr-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hr-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/hr-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hr.es | 27.9 | 0.498 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hr-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hr",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hr #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-hr-es
* source languages: hr
* target languages: es
* OPUS readme: hr-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.9, chr-F: 0.498
| [
"### opus-mt-hr-es\n\n\n* source languages: hr\n* target languages: es\n* OPUS readme: hr-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.498"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hr #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-hr-es\n\n\n* source languages: hr\n* target languages: es\n* OPUS readme: hr-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.498"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hr #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-hr-es\n\n\n* source languages: hr\n* target languages: es\n* OPUS readme: hr-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.498"
] |
translation | transformers |
### opus-mt-hr-fi
* source languages: hr
* target languages: fi
* OPUS readme: [hr-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hr-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/hr-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hr.fi | 25.0 | 0.519 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hr-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hr",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hr #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-hr-fi
* source languages: hr
* target languages: fi
* OPUS readme: hr-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.0, chr-F: 0.519
| [
"### opus-mt-hr-fi\n\n\n* source languages: hr\n* target languages: fi\n* OPUS readme: hr-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.519"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hr #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-hr-fi\n\n\n* source languages: hr\n* target languages: fi\n* OPUS readme: hr-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.519"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hr #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-hr-fi\n\n\n* source languages: hr\n* target languages: fi\n* OPUS readme: hr-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.519"
] |
translation | transformers |
### opus-mt-hr-fr
* source languages: hr
* target languages: fr
* OPUS readme: [hr-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hr-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/hr-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hr.fr | 26.1 | 0.482 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hr-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hr",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hr #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-hr-fr
* source languages: hr
* target languages: fr
* OPUS readme: hr-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.1, chr-F: 0.482
| [
"### opus-mt-hr-fr\n\n\n* source languages: hr\n* target languages: fr\n* OPUS readme: hr-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.482"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hr #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-hr-fr\n\n\n* source languages: hr\n* target languages: fr\n* OPUS readme: hr-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.482"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hr #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-hr-fr\n\n\n* source languages: hr\n* target languages: fr\n* OPUS readme: hr-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.482"
] |
translation | transformers |
### opus-mt-hr-sv
* source languages: hr
* target languages: sv
* OPUS readme: [hr-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hr-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/hr-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hr-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.hr.sv | 30.5 | 0.526 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hr-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hr",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hr #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-hr-sv
* source languages: hr
* target languages: sv
* OPUS readme: hr-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.5, chr-F: 0.526
| [
"### opus-mt-hr-sv\n\n\n* source languages: hr\n* target languages: sv\n* OPUS readme: hr-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.5, chr-F: 0.526"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hr #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-hr-sv\n\n\n* source languages: hr\n* target languages: sv\n* OPUS readme: hr-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.5, chr-F: 0.526"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hr #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-hr-sv\n\n\n* source languages: hr\n* target languages: sv\n* OPUS readme: hr-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.5, chr-F: 0.526"
] |
translation | transformers |
### opus-mt-ht-en
* source languages: ht
* target languages: en
* OPUS readme: [ht-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ht-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ht-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ht.en | 37.5 | 0.542 |
| Tatoeba.ht.en | 57.0 | 0.689 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ht-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ht",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ht #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ht-en
* source languages: ht
* target languages: en
* OPUS readme: ht-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.5, chr-F: 0.542
testset: URL, BLEU: 57.0, chr-F: 0.689
| [
"### opus-mt-ht-en\n\n\n* source languages: ht\n* target languages: en\n* OPUS readme: ht-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.5, chr-F: 0.542\ntestset: URL, BLEU: 57.0, chr-F: 0.689"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ht #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ht-en\n\n\n* source languages: ht\n* target languages: en\n* OPUS readme: ht-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.5, chr-F: 0.542\ntestset: URL, BLEU: 57.0, chr-F: 0.689"
] | [
52,
132
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ht #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ht-en\n\n\n* source languages: ht\n* target languages: en\n* OPUS readme: ht-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.5, chr-F: 0.542\ntestset: URL, BLEU: 57.0, chr-F: 0.689"
] |
translation | transformers |
### opus-mt-ht-es
* source languages: ht
* target languages: es
* OPUS readme: [ht-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ht-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ht-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ht.es | 23.7 | 0.418 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ht-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ht",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ht #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ht-es
* source languages: ht
* target languages: es
* OPUS readme: ht-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.7, chr-F: 0.418
| [
"### opus-mt-ht-es\n\n\n* source languages: ht\n* target languages: es\n* OPUS readme: ht-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.7, chr-F: 0.418"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ht #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ht-es\n\n\n* source languages: ht\n* target languages: es\n* OPUS readme: ht-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.7, chr-F: 0.418"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ht #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ht-es\n\n\n* source languages: ht\n* target languages: es\n* OPUS readme: ht-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.7, chr-F: 0.418"
] |
translation | transformers |
### opus-mt-ht-fi
* source languages: ht
* target languages: fi
* OPUS readme: [ht-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ht-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ht-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ht.fi | 23.3 | 0.464 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ht-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ht",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ht #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ht-fi
* source languages: ht
* target languages: fi
* OPUS readme: ht-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.3, chr-F: 0.464
| [
"### opus-mt-ht-fi\n\n\n* source languages: ht\n* target languages: fi\n* OPUS readme: ht-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.464"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ht #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ht-fi\n\n\n* source languages: ht\n* target languages: fi\n* OPUS readme: ht-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.464"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ht #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ht-fi\n\n\n* source languages: ht\n* target languages: fi\n* OPUS readme: ht-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.464"
] |
translation | transformers |
### opus-mt-ht-fr
* source languages: ht
* target languages: fr
* OPUS readme: [ht-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ht-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ht-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ht.fr | 28.4 | 0.469 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ht-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ht",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ht #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ht-fr
* source languages: ht
* target languages: fr
* OPUS readme: ht-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 28.4, chr-F: 0.469
| [
"### opus-mt-ht-fr\n\n\n* source languages: ht\n* target languages: fr\n* OPUS readme: ht-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.4, chr-F: 0.469"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ht #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ht-fr\n\n\n* source languages: ht\n* target languages: fr\n* OPUS readme: ht-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.4, chr-F: 0.469"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ht #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ht-fr\n\n\n* source languages: ht\n* target languages: fr\n* OPUS readme: ht-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.4, chr-F: 0.469"
] |
translation | transformers |
### opus-mt-ht-sv
* source languages: ht
* target languages: sv
* OPUS readme: [ht-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ht-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ht-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ht-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ht.sv | 27.9 | 0.463 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ht-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ht",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ht #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ht-sv
* source languages: ht
* target languages: sv
* OPUS readme: ht-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.9, chr-F: 0.463
| [
"### opus-mt-ht-sv\n\n\n* source languages: ht\n* target languages: sv\n* OPUS readme: ht-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.463"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ht #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ht-sv\n\n\n* source languages: ht\n* target languages: sv\n* OPUS readme: ht-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.463"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ht #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ht-sv\n\n\n* source languages: ht\n* target languages: sv\n* OPUS readme: ht-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.463"
] |
translation | transformers |
### opus-mt-hu-de
* source languages: hu
* target languages: de
* OPUS readme: [hu-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hu-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/hu-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.hu.de | 44.1 | 0.637 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hu-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hu",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hu #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-hu-de
* source languages: hu
* target languages: de
* OPUS readme: hu-de
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 44.1, chr-F: 0.637
| [
"### opus-mt-hu-de\n\n\n* source languages: hu\n* target languages: de\n* OPUS readme: hu-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.1, chr-F: 0.637"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hu #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-hu-de\n\n\n* source languages: hu\n* target languages: de\n* OPUS readme: hu-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.1, chr-F: 0.637"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hu #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-hu-de\n\n\n* source languages: hu\n* target languages: de\n* OPUS readme: hu-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.1, chr-F: 0.637"
] |
translation | transformers |
### opus-mt-hu-en
* source languages: hu
* target languages: en
* OPUS readme: [hu-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hu-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/hu-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.hu.en | 52.9 | 0.683 |
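For finer control over decoding than the pipeline gives, the same checkpoint can be driven through the Marian classes directly; a minimal sketch using the standard `transformers` Marian API (beam size and length limit are arbitrary choices, not values from this card):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-hu-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Tokenize a small Hungarian batch and translate it with beam search.
batch = tokenizer(["Jó reggelt!", "Köszönöm szépen."], return_tensors="pt", padding=True)
generated = model.generate(**batch, num_beams=4, max_length=128)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```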
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hu-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hu",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hu #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-hu-en
* source languages: hu
* target languages: en
* OPUS readme: hu-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 52.9, chr-F: 0.683
| [
"### opus-mt-hu-en\n\n\n* source languages: hu\n* target languages: en\n* OPUS readme: hu-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.9, chr-F: 0.683"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hu #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-hu-en\n\n\n* source languages: hu\n* target languages: en\n* OPUS readme: hu-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.9, chr-F: 0.683"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hu #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-hu-en\n\n\n* source languages: hu\n* target languages: en\n* OPUS readme: hu-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.9, chr-F: 0.683"
] |
translation | transformers |
### hun-epo
* source group: Hungarian
* target group: Esperanto
* OPUS readme: [hun-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hun-epo/README.md)
* model: transformer-align
* source language(s): hun
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.hun.epo | 17.9 | 0.378 |
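The BLEU and chr-F figures above can be recomputed from the released test-set translations (linked above) with `sacrebleu`; a sketch, with the hypothesis/reference file names assumed for illustration. Note that `sacrebleu` reports chrF on a 0–100 scale, while this card uses 0–1:

```python
import sacrebleu

# Placeholder file names; the actual test/eval files are linked in this card.
with open("hun-epo.hyp", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("hun-epo.ref", encoding="utf-8") as f:
    references = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
print(f"BLEU = {bleu.score:.1f}, chrF = {chrf.score / 100:.3f}")
```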
### System Info:
- hf_name: hun-epo
- source_languages: hun
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hun-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['hu', 'eo']
- src_constituents: {'hun'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/hun-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/hun-epo/opus-2020-06-16.test.txt
- src_alpha3: hun
- tgt_alpha3: epo
- short_pair: hu-eo
- chrF2_score: 0.37799999999999995
- bleu: 17.9
- brevity_penalty: 0.934
- ref_len: 76005.0
- src_name: Hungarian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: hu
- tgt_alpha2: eo
- prefer_old: False
- long_pair: hun-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["hu", "eo"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hu-eo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hu",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hu",
"eo"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hu #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### hun-epo
* source group: Hungarian
* target group: Esperanto
* OPUS readme: hun-epo
* model: transformer-align
* source language(s): hun
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 17.9, chr-F: 0.378
### System Info:
* hf\_name: hun-epo
* source\_languages: hun
* target\_languages: epo
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['hu', 'eo']
* src\_constituents: {'hun'}
* tgt\_constituents: {'epo'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: hun
* tgt\_alpha3: epo
* short\_pair: hu-eo
* chrF2\_score: 0.37799999999999995
* bleu: 17.9
* brevity\_penalty: 0.934
* ref\_len: 76005.0
* src\_name: Hungarian
* tgt\_name: Esperanto
* train\_date: 2020-06-16
* src\_alpha2: hu
* tgt\_alpha2: eo
* prefer\_old: False
* long\_pair: hun-epo
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### hun-epo\n\n\n* source group: Hungarian\n* target group: Esperanto\n* OPUS readme: hun-epo\n* model: transformer-align\n* source language(s): hun\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.9, chr-F: 0.378",
"### System Info:\n\n\n* hf\\_name: hun-epo\n* source\\_languages: hun\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hu', 'eo']\n* src\\_constituents: {'hun'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hun\n* tgt\\_alpha3: epo\n* short\\_pair: hu-eo\n* chrF2\\_score: 0.37799999999999995\n* bleu: 17.9\n* brevity\\_penalty: 0.934\n* ref\\_len: 76005.0\n* src\\_name: Hungarian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: hu\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: hun-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hu #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### hun-epo\n\n\n* source group: Hungarian\n* target group: Esperanto\n* OPUS readme: hun-epo\n* model: transformer-align\n* source language(s): hun\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.9, chr-F: 0.378",
"### System Info:\n\n\n* hf\\_name: hun-epo\n* source\\_languages: hun\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hu', 'eo']\n* src\\_constituents: {'hun'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hun\n* tgt\\_alpha3: epo\n* short\\_pair: hu-eo\n* chrF2\\_score: 0.37799999999999995\n* bleu: 17.9\n* brevity\\_penalty: 0.934\n* ref\\_len: 76005.0\n* src\\_name: Hungarian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: hu\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: hun-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
139,
421
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hu #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### hun-epo\n\n\n* source group: Hungarian\n* target group: Esperanto\n* OPUS readme: hun-epo\n* model: transformer-align\n* source language(s): hun\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.9, chr-F: 0.378### System Info:\n\n\n* hf\\_name: hun-epo\n* source\\_languages: hun\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hu', 'eo']\n* src\\_constituents: {'hun'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hun\n* tgt\\_alpha3: epo\n* short\\_pair: hu-eo\n* chrF2\\_score: 0.37799999999999995\n* bleu: 17.9\n* brevity\\_penalty: 0.934\n* ref\\_len: 76005.0\n* src\\_name: Hungarian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: hu\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: hun-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-hu-fi
* source languages: hu
* target languages: fi
* OPUS readme: [hu-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hu-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/hu-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.hu.fi | 48.2 | 0.700 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hu-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hu",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hu #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-hu-fi
* source languages: hu
* target languages: fi
* OPUS readme: hu-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 48.2, chr-F: 0.700
| [
"### opus-mt-hu-fi\n\n\n* source languages: hu\n* target languages: fi\n* OPUS readme: hu-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.2, chr-F: 0.700"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hu #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-hu-fi\n\n\n* source languages: hu\n* target languages: fi\n* OPUS readme: hu-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.2, chr-F: 0.700"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hu #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-hu-fi\n\n\n* source languages: hu\n* target languages: fi\n* OPUS readme: hu-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.2, chr-F: 0.700"
] |
translation | transformers |
### opus-mt-hu-fr
* source languages: hu
* target languages: fr
* OPUS readme: [hu-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hu-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/hu-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.hu.fr | 50.3 | 0.660 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hu-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hu",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hu #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-hu-fr
* source languages: hu
* target languages: fr
* OPUS readme: hu-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 50.3, chr-F: 0.660
| [
"### opus-mt-hu-fr\n\n\n* source languages: hu\n* target languages: fr\n* OPUS readme: hu-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.3, chr-F: 0.660"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hu #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-hu-fr\n\n\n* source languages: hu\n* target languages: fr\n* OPUS readme: hu-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.3, chr-F: 0.660"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hu #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-hu-fr\n\n\n* source languages: hu\n* target languages: fr\n* OPUS readme: hu-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.3, chr-F: 0.660"
] |
translation | transformers |
### opus-mt-hu-sv
* source languages: hu
* target languages: sv
* OPUS readme: [hu-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hu-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/hu-sv/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-sv/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hu-sv/opus-2020-01-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.hu.sv | 52.6 | 0.686 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hu-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hu",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hu #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-hu-sv
* source languages: hu
* target languages: sv
* OPUS readme: hu-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 52.6, chr-F: 0.686
| [
"### opus-mt-hu-sv\n\n\n* source languages: hu\n* target languages: sv\n* OPUS readme: hu-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.6, chr-F: 0.686"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hu #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-hu-sv\n\n\n* source languages: hu\n* target languages: sv\n* OPUS readme: hu-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.6, chr-F: 0.686"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hu #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-hu-sv\n\n\n* source languages: hu\n* target languages: sv\n* OPUS readme: hu-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.6, chr-F: 0.686"
] |
translation | transformers |
### hun-ukr
* source group: Hungarian
* target group: Ukrainian
* OPUS readme: [hun-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hun-ukr/README.md)
* model: transformer-align
* source language(s): hun
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hun-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.hun.ukr | 41.2 | 0.611 |
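The pre-processing line means the raw Marian weights expect SentencePiece-segmented input (32k-vocabulary models on both sides). The Hugging Face port bundles this step in its tokenizer, but for the original OPUS weights the segmentation would look roughly as follows; `source.spm` is a placeholder for the SentencePiece model shipped inside the weights zip:

```python
import sentencepiece as spm

# Placeholder path; the actual spm32k model ships in the OPUS weights zip.
sp = spm.SentencePieceProcessor(model_file="source.spm")

# Marian consumes space-joined SentencePiece pieces.
pieces = sp.encode("Jó napot kívánok!", out_type=str)
print(" ".join(pieces))
```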
### System Info:
- hf_name: hun-ukr
- source_languages: hun
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hun-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['hu', 'uk']
- src_constituents: {'hun'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/hun-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/hun-ukr/opus-2020-06-17.test.txt
- src_alpha3: hun
- tgt_alpha3: ukr
- short_pair: hu-uk
- chrF2_score: 0.611
- bleu: 41.2
- brevity_penalty: 0.966
- ref_len: 2568.0
- src_name: Hungarian
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: hu
- tgt_alpha2: uk
- prefer_old: False
- long_pair: hun-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["hu", "uk"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hu-uk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hu",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hu",
"uk"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hu #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### hun-ukr
* source group: Hungarian
* target group: Ukrainian
* OPUS readme: hun-ukr
* model: transformer-align
* source language(s): hun
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 41.2, chr-F: 0.611
### System Info:
* hf\_name: hun-ukr
* source\_languages: hun
* target\_languages: ukr
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['hu', 'uk']
* src\_constituents: {'hun'}
* tgt\_constituents: {'ukr'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: hun
* tgt\_alpha3: ukr
* short\_pair: hu-uk
* chrF2\_score: 0.611
* bleu: 41.2
* brevity\_penalty: 0.966
* ref\_len: 2568.0
* src\_name: Hungarian
* tgt\_name: Ukrainian
* train\_date: 2020-06-17
* src\_alpha2: hu
* tgt\_alpha2: uk
* prefer\_old: False
* long\_pair: hun-ukr
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### hun-ukr\n\n\n* source group: Hungarian\n* target group: Ukrainian\n* OPUS readme: hun-ukr\n* model: transformer-align\n* source language(s): hun\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.2, chr-F: 0.611",
"### System Info:\n\n\n* hf\\_name: hun-ukr\n* source\\_languages: hun\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hu', 'uk']\n* src\\_constituents: {'hun'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hun\n* tgt\\_alpha3: ukr\n* short\\_pair: hu-uk\n* chrF2\\_score: 0.611\n* bleu: 41.2\n* brevity\\_penalty: 0.966\n* ref\\_len: 2568.0\n* src\\_name: Hungarian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: hu\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: hun-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hu #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### hun-ukr\n\n\n* source group: Hungarian\n* target group: Ukrainian\n* OPUS readme: hun-ukr\n* model: transformer-align\n* source language(s): hun\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.2, chr-F: 0.611",
"### System Info:\n\n\n* hf\\_name: hun-ukr\n* source\\_languages: hun\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hu', 'uk']\n* src\\_constituents: {'hun'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hun\n* tgt\\_alpha3: ukr\n* short\\_pair: hu-uk\n* chrF2\\_score: 0.611\n* bleu: 41.2\n* brevity\\_penalty: 0.966\n* ref\\_len: 2568.0\n* src\\_name: Hungarian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: hu\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: hun-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
137,
401
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hu #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### hun-ukr\n\n\n* source group: Hungarian\n* target group: Ukrainian\n* OPUS readme: hun-ukr\n* model: transformer-align\n* source language(s): hun\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.2, chr-F: 0.611### System Info:\n\n\n* hf\\_name: hun-ukr\n* source\\_languages: hun\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hu', 'uk']\n* src\\_constituents: {'hun'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hun\n* tgt\\_alpha3: ukr\n* short\\_pair: hu-uk\n* chrF2\\_score: 0.611\n* bleu: 41.2\n* brevity\\_penalty: 0.966\n* ref\\_len: 2568.0\n* src\\_name: Hungarian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: hu\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: hun-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-hy-en
* source languages: hy
* target languages: en
* OPUS readme: [hy-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hy-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/hy-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/hy-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/hy-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.hy.en | 29.5 | 0.466 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hy-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hy",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hy #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-hy-en
* source languages: hy
* target languages: en
* OPUS readme: hy-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 29.5, chr-F: 0.466
| [
"### opus-mt-hy-en\n\n\n* source languages: hy\n* target languages: en\n* OPUS readme: hy-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.5, chr-F: 0.466"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hy #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-hy-en\n\n\n* source languages: hy\n* target languages: en\n* OPUS readme: hy-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.5, chr-F: 0.466"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hy #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-hy-en\n\n\n* source languages: hy\n* target languages: en\n* OPUS readme: hy-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.5, chr-F: 0.466"
] |
translation | transformers |
### hye-rus
* source group: Armenian
* target group: Russian
* OPUS readme: [hye-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hye-rus/README.md)
* model: transformer-align
* source language(s): hye hye_Latn
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hye-rus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hye-rus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hye-rus/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.hye.rus | 25.6 | 0.476 |
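The `brevity_penalty` value in the System Info below follows the standard BLEU definition, BP = exp(1 − r/c) when the hypothesis length c falls short of the reference length r (and 1 otherwise). A quick arithmetic check; only ref_len = 1624 is reported, so the hypothesis length 1513 is back-solved purely for illustration:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """Standard BLEU brevity penalty: penalizes hypotheses shorter than the reference."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# ref_len comes from the System Info below; hyp_len is illustrative only.
print(round(brevity_penalty(1513, 1624), 3))  # ~0.929
```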
### System Info:
- hf_name: hye-rus
- source_languages: hye
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hye-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['hy', 'ru']
- src_constituents: {'hye', 'hye_Latn'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/hye-rus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/hye-rus/opus-2020-06-16.test.txt
- src_alpha3: hye
- tgt_alpha3: rus
- short_pair: hy-ru
- chrF2_score: 0.47600000000000003
- bleu: 25.6
- brevity_penalty: 0.929
- ref_len: 1624.0
- src_name: Armenian
- tgt_name: Russian
- train_date: 2020-06-16
- src_alpha2: hy
- tgt_alpha2: ru
- prefer_old: False
- long_pair: hye-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["hy", "ru"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-hy-ru | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hy",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hy",
"ru"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hy #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### hye-rus
* source group: Armenian
* target group: Russian
* OPUS readme: hye-rus
* model: transformer-align
* source language(s): hye hye\_Latn
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.6, chr-F: 0.476
### System Info:
* hf\_name: hye-rus
* source\_languages: hye
* target\_languages: rus
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['hy', 'ru']
* src\_constituents: {'hye', 'hye\_Latn'}
* tgt\_constituents: {'rus'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: hye
* tgt\_alpha3: rus
* short\_pair: hy-ru
* chrF2\_score: 0.47600000000000003
* bleu: 25.6
* brevity\_penalty: 0.929
* ref\_len: 1624.0
* src\_name: Armenian
* tgt\_name: Russian
* train\_date: 2020-06-16
* src\_alpha2: hy
* tgt\_alpha2: ru
* prefer\_old: False
* long\_pair: hye-rus
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### hye-rus\n\n\n* source group: Armenian\n* target group: Russian\n* OPUS readme: hye-rus\n* model: transformer-align\n* source language(s): hye hye\\_Latn\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.6, chr-F: 0.476",
"### System Info:\n\n\n* hf\\_name: hye-rus\n* source\\_languages: hye\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hy', 'ru']\n* src\\_constituents: {'hye', 'hye\\_Latn'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hye\n* tgt\\_alpha3: rus\n* short\\_pair: hy-ru\n* chrF2\\_score: 0.47600000000000003\n* bleu: 25.6\n* brevity\\_penalty: 0.929\n* ref\\_len: 1624.0\n* src\\_name: Armenian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-16\n* src\\_alpha2: hy\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: hye-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hy #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### hye-rus\n\n\n* source group: Armenian\n* target group: Russian\n* OPUS readme: hye-rus\n* model: transformer-align\n* source language(s): hye hye\\_Latn\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.6, chr-F: 0.476",
"### System Info:\n\n\n* hf\\_name: hye-rus\n* source\\_languages: hye\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hy', 'ru']\n* src\\_constituents: {'hye', 'hye\\_Latn'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hye\n* tgt\\_alpha3: rus\n* short\\_pair: hy-ru\n* chrF2\\_score: 0.47600000000000003\n* bleu: 25.6\n* brevity\\_penalty: 0.929\n* ref\\_len: 1624.0\n* src\\_name: Armenian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-16\n* src\\_alpha2: hy\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: hye-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
141,
415
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hy #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### hye-rus\n\n\n* source group: Armenian\n* target group: Russian\n* OPUS readme: hye-rus\n* model: transformer-align\n* source language(s): hye hye\\_Latn\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.6, chr-F: 0.476### System Info:\n\n\n* hf\\_name: hye-rus\n* source\\_languages: hye\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hy', 'ru']\n* src\\_constituents: {'hye', 'hye\\_Latn'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: hye\n* tgt\\_alpha3: rus\n* short\\_pair: hy-ru\n* chrF2\\_score: 0.47600000000000003\n* bleu: 25.6\n* brevity\\_penalty: 0.929\n* ref\\_len: 1624.0\n* src\\_name: Armenian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-16\n* src\\_alpha2: hy\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: hye-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-id-en
* source languages: id
* target languages: en
* OPUS readme: [id-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/id-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/id-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.id.en | 47.7 | 0.647 |
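Besides the hub checkpoint, the original Marian weights are distributed as a plain zip archive (the "download original weights" link above); fetching and unpacking it needs only `requests` plus the standard library. A sketch, with the URL copied from this card and the output directory an arbitrary choice:

```python
import io
import zipfile

import requests

url = "https://object.pouta.csc.fi/OPUS-MT-models/id-en/opus-2019-12-18.zip"
response = requests.get(url, timeout=60)
response.raise_for_status()

# The archive contains the Marian model, vocabularies and SentencePiece files.
with zipfile.ZipFile(io.BytesIO(response.content)) as archive:
    archive.extractall("opus-mt-id-en")
```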
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-id-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"id",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #id #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-id-en
* source languages: id
* target languages: en
* OPUS readme: id-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 47.7, chr-F: 0.647
| [
"### opus-mt-id-en\n\n\n* source languages: id\n* target languages: en\n* OPUS readme: id-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.7, chr-F: 0.647"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #id #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-id-en\n\n\n* source languages: id\n* target languages: en\n* OPUS readme: id-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.7, chr-F: 0.647"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #id #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-id-en\n\n\n* source languages: id\n* target languages: en\n* OPUS readme: id-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.7, chr-F: 0.647"
] |
translation | transformers |
### opus-mt-id-es
* source languages: id
* target languages: es
* OPUS readme: [id-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/id-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/id-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.id.es | 21.8 | 0.483 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-id-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"id",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #id #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-id-es
* source languages: id
* target languages: es
* OPUS readme: id-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.8, chr-F: 0.483
| [
"### opus-mt-id-es\n\n\n* source languages: id\n* target languages: es\n* OPUS readme: id-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.483"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #id #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-id-es\n\n\n* source languages: id\n* target languages: es\n* OPUS readme: id-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.483"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #id #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-id-es\n\n\n* source languages: id\n* target languages: es\n* OPUS readme: id-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.483"
] |
translation | transformers |
### opus-mt-id-fi
* source languages: id
* target languages: fi
* OPUS readme: [id-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/id-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/id-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.id.fi | 27.4 | 0.522 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-id-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"id",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #id #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-id-fi
* source languages: id
* target languages: fi
* OPUS readme: id-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.4, chr-F: 0.522
| [
"### opus-mt-id-fi\n\n\n* source languages: id\n* target languages: fi\n* OPUS readme: id-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.4, chr-F: 0.522"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #id #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-id-fi\n\n\n* source languages: id\n* target languages: fi\n* OPUS readme: id-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.4, chr-F: 0.522"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #id #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-id-fi\n\n\n* source languages: id\n* target languages: fi\n* OPUS readme: id-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.4, chr-F: 0.522"
] |
translation | transformers |
### opus-mt-id-fr
* source languages: id
* target languages: fr
* OPUS readme: [id-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/id-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/id-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.id.fr | 43.8 | 0.616 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-id-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"id",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #id #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-id-fr
* source languages: id
* target languages: fr
* OPUS readme: id-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 43.8, chr-F: 0.616
| [
"### opus-mt-id-fr\n\n\n* source languages: id\n* target languages: fr\n* OPUS readme: id-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.8, chr-F: 0.616"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #id #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-id-fr\n\n\n* source languages: id\n* target languages: fr\n* OPUS readme: id-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.8, chr-F: 0.616"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #id #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-id-fr\n\n\n* source languages: id\n* target languages: fr\n* OPUS readme: id-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.8, chr-F: 0.616"
] |
translation | transformers |
### opus-mt-id-sv
* source languages: id
* target languages: sv
* OPUS readme: [id-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/id-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/id-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/id-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.id.sv | 32.7 | 0.527 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-id-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"id",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #id #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-id-sv
* source languages: id
* target languages: sv
* OPUS readme: id-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 32.7, chr-F: 0.527
| [
"### opus-mt-id-sv\n\n\n* source languages: id\n* target languages: sv\n* OPUS readme: id-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.7, chr-F: 0.527"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #id #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-id-sv\n\n\n* source languages: id\n* target languages: sv\n* OPUS readme: id-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.7, chr-F: 0.527"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #id #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-id-sv\n\n\n* source languages: id\n* target languages: sv\n* OPUS readme: id-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.7, chr-F: 0.527"
] |
translation | transformers |
### opus-mt-ig-de
* source languages: ig
* target languages: de
* OPUS readme: [ig-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ig-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ig-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ig.de | 20.1 | 0.393 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ig-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ig",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ig #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ig-de
* source languages: ig
* target languages: de
* OPUS readme: ig-de
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 20.1, chr-F: 0.393
| [
"### opus-mt-ig-de\n\n\n* source languages: ig\n* target languages: de\n* OPUS readme: ig-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.1, chr-F: 0.393"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ig #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ig-de\n\n\n* source languages: ig\n* target languages: de\n* OPUS readme: ig-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.1, chr-F: 0.393"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ig #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ig-de\n\n\n* source languages: ig\n* target languages: de\n* OPUS readme: ig-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.1, chr-F: 0.393"
] |
translation | transformers |
### opus-mt-ig-en
* source languages: ig
* target languages: en
* OPUS readme: [ig-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ig-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ig-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ig.en | 36.7 | 0.520 |
| Tatoeba.ig.en | 46.3 | 0.528 |
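
Because the card reports scores on two test sets, batched inference is handy when re-scoring. A short batching sketch, assuming the checkpoint lives at `Helsinki-NLP/opus-mt-ig-en` (repo id from the metadata below); the Igbo inputs are illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ig-en"  # assumption: repo id from the metadata below
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# padding=True lets sentences of different length share one batch.
src_texts = ["Kedu aha gị?", "Ahụrụ m gị n'anya."]  # illustrative Igbo inputs
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
outputs = model.generate(**batch)
for line in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(line)
```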
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ig-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ig",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ig #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ig-en
* source languages: ig
* target languages: en
* OPUS readme: ig-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 36.7, chr-F: 0.520
testset: URL, BLEU: 46.3, chr-F: 0.528
| [
"### opus-mt-ig-en\n\n\n* source languages: ig\n* target languages: en\n* OPUS readme: ig-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.7, chr-F: 0.520\ntestset: URL, BLEU: 46.3, chr-F: 0.528"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ig #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ig-en\n\n\n* source languages: ig\n* target languages: en\n* OPUS readme: ig-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.7, chr-F: 0.520\ntestset: URL, BLEU: 46.3, chr-F: 0.528"
] | [
52,
131
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ig #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ig-en\n\n\n* source languages: ig\n* target languages: en\n* OPUS readme: ig-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.7, chr-F: 0.520\ntestset: URL, BLEU: 46.3, chr-F: 0.528"
] |
translation | transformers |
### opus-mt-ig-es
* source languages: ig
* target languages: es
* OPUS readme: [ig-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ig-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ig-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ig.es | 24.6 | 0.420 |
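
The original Marian weights are distributed as the zip archive linked above. A download sketch for convenience: the URL is copied from the card, `requests` is assumed to be installed, and the archive contents (Marian model, vocabulary, SentencePiece models) are an assumption based on other OPUS-MT releases.

```python
import zipfile

import requests

# URL copied from the "download original weights" entry above.
url = "https://object.pouta.csc.fi/OPUS-MT-models/ig-es/opus-2020-01-16.zip"
archive = "opus-2020-01-16.zip"

with requests.get(url, stream=True) as resp:
    resp.raise_for_status()
    with open(archive, "wb") as fh:
        for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
            fh.write(chunk)

zipfile.ZipFile(archive).extractall("opus-mt-ig-es-original")
```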
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ig-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ig",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ig #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ig-es
* source languages: ig
* target languages: es
* OPUS readme: ig-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.6, chr-F: 0.420
| [
"### opus-mt-ig-es\n\n\n* source languages: ig\n* target languages: es\n* OPUS readme: ig-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.6, chr-F: 0.420"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ig #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ig-es\n\n\n* source languages: ig\n* target languages: es\n* OPUS readme: ig-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.6, chr-F: 0.420"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ig #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ig-es\n\n\n* source languages: ig\n* target languages: es\n* OPUS readme: ig-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.6, chr-F: 0.420"
] |
translation | transformers |
### opus-mt-ig-fi
* source languages: ig
* target languages: fi
* OPUS readme: [ig-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ig-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ig-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ig.fi | 23.5 | 0.451 |
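
The BLEU and chr-F figures above can, in principle, be recomputed from the linked test-set files with `sacrebleu`. A hedged scoring sketch: the hypothesis and reference lists are placeholders to be filled from the test files, and exact score reproduction may depend on the sacrebleu version.

```python
import sacrebleu

# Placeholders: populate from the linked test-set translations and references.
hypotheses = ["Tämä on esimerkki."]
references = [["Tämä on esimerkki."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
# sacrebleu reports chrF on a 0-100 scale; the card uses 0-1.
print(f"BLEU = {bleu.score:.1f}, chr-F = {chrf.score / 100:.3f}")
```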
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ig-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ig",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ig #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ig-fi
* source languages: ig
* target languages: fi
* OPUS readme: ig-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.5, chr-F: 0.451
| [
"### opus-mt-ig-fi\n\n\n* source languages: ig\n* target languages: fi\n* OPUS readme: ig-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.5, chr-F: 0.451"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ig #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ig-fi\n\n\n* source languages: ig\n* target languages: fi\n* OPUS readme: ig-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.5, chr-F: 0.451"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ig #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ig-fi\n\n\n* source languages: ig\n* target languages: fi\n* OPUS readme: ig-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.5, chr-F: 0.451"
] |
translation | transformers |
### opus-mt-ig-fr
* source languages: ig
* target languages: fr
* OPUS readme: [ig-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ig-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ig-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ig.fr | 25.6 | 0.427 |
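
Decode-time settings influence output quality for Marian models. A small sketch: the beam settings are illustrative, not the values behind the scores above, and the repo id is assumed from the metadata below.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ig-fr"  # assumption: repo id from the metadata below
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Daalụ."], return_tensors="pt")  # illustrative Igbo input ("Thank you.")
# Wider beams trade decoding speed for (usually) slightly better output.
outputs = model.generate(**batch, num_beams=6, max_length=128, early_stopping=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```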
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ig-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ig",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ig #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ig-fr
* source languages: ig
* target languages: fr
* OPUS readme: ig-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.6, chr-F: 0.427
| [
"### opus-mt-ig-fr\n\n\n* source languages: ig\n* target languages: fr\n* OPUS readme: ig-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.6, chr-F: 0.427"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ig #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ig-fr\n\n\n* source languages: ig\n* target languages: fr\n* OPUS readme: ig-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.6, chr-F: 0.427"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ig #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ig-fr\n\n\n* source languages: ig\n* target languages: fr\n* OPUS readme: ig-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.6, chr-F: 0.427"
] |
translation | transformers |
### opus-mt-ig-sv
* source languages: ig
* target languages: sv
* OPUS readme: [ig-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ig-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ig-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ig-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ig.sv | 27.0 | 0.451 |
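
The "normalization + SentencePiece" pre-processing noted above is baked into the tokenizer, so it can be inspected directly. A short sketch, assuming the repo id from the metadata below.

```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-ig-sv")

# tokenize() exposes the underlying SentencePiece pieces; the leading '▁'
# marks a word boundary in SentencePiece output.
print(tokenizer.tokenize("Kedu aha gị?"))  # illustrative Igbo input
```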
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ig-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ig",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ig #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ig-sv
* source languages: ig
* target languages: sv
* OPUS readme: ig-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.0, chr-F: 0.451
| [
"### opus-mt-ig-sv\n\n\n* source languages: ig\n* target languages: sv\n* OPUS readme: ig-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.0, chr-F: 0.451"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ig #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ig-sv\n\n\n* source languages: ig\n* target languages: sv\n* OPUS readme: ig-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.0, chr-F: 0.451"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ig #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ig-sv\n\n\n* source languages: ig\n* target languages: sv\n* OPUS readme: ig-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.0, chr-F: 0.451"
] |
translation | transformers |
### iir-eng
* source group: Indo-Iranian languages
* target group: English
* OPUS readme: [iir-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/iir-eng/README.md)
* model: transformer
* source language(s): asm awa ben bho gom guj hif_Latn hin jdt_Cyrl kur_Arab kur_Latn mai mar npi ori oss pan_Guru pes pes_Latn pes_Thaa pnb pus rom san_Deva sin snd_Arab tgk_Cyrl tly_Latn urd zza
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-hineng.hin.eng | 8.1 | 0.324 |
| newsdev2019-engu-gujeng.guj.eng | 8.1 | 0.309 |
| newstest2014-hien-hineng.hin.eng | 12.1 | 0.380 |
| newstest2019-guen-gujeng.guj.eng | 6.0 | 0.280 |
| Tatoeba-test.asm-eng.asm.eng | 13.9 | 0.327 |
| Tatoeba-test.awa-eng.awa.eng | 7.0 | 0.219 |
| Tatoeba-test.ben-eng.ben.eng | 42.5 | 0.576 |
| Tatoeba-test.bho-eng.bho.eng | 27.3 | 0.452 |
| Tatoeba-test.fas-eng.fas.eng | 5.6 | 0.262 |
| Tatoeba-test.guj-eng.guj.eng | 15.9 | 0.350 |
| Tatoeba-test.hif-eng.hif.eng | 10.1 | 0.247 |
| Tatoeba-test.hin-eng.hin.eng | 36.5 | 0.544 |
| Tatoeba-test.jdt-eng.jdt.eng | 11.4 | 0.094 |
| Tatoeba-test.kok-eng.kok.eng | 6.6 | 0.256 |
| Tatoeba-test.kur-eng.kur.eng | 3.4 | 0.149 |
| Tatoeba-test.lah-eng.lah.eng | 17.4 | 0.301 |
| Tatoeba-test.mai-eng.mai.eng | 65.4 | 0.703 |
| Tatoeba-test.mar-eng.mar.eng | 22.5 | 0.468 |
| Tatoeba-test.multi.eng | 21.3 | 0.424 |
| Tatoeba-test.nep-eng.nep.eng | 3.4 | 0.185 |
| Tatoeba-test.ori-eng.ori.eng | 4.8 | 0.244 |
| Tatoeba-test.oss-eng.oss.eng | 1.6 | 0.173 |
| Tatoeba-test.pan-eng.pan.eng | 14.8 | 0.348 |
| Tatoeba-test.pus-eng.pus.eng | 1.1 | 0.182 |
| Tatoeba-test.rom-eng.rom.eng | 2.8 | 0.185 |
| Tatoeba-test.san-eng.san.eng | 2.8 | 0.185 |
| Tatoeba-test.sin-eng.sin.eng | 22.8 | 0.474 |
| Tatoeba-test.snd-eng.snd.eng | 8.2 | 0.287 |
| Tatoeba-test.tgk-eng.tgk.eng | 11.9 | 0.321 |
| Tatoeba-test.tly-eng.tly.eng | 0.9 | 0.076 |
| Tatoeba-test.urd-eng.urd.eng | 23.9 | 0.438 |
| Tatoeba-test.zza-eng.zza.eng | 0.6 | 0.098 |
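
Since the source side covers many Indo-Iranian languages while the only target is English, no target-language token is needed and inputs in different source languages can share a batch. A hedged sketch: the repo id `Helsinki-NLP/opus-mt-iir-en` comes from the metadata below, and the Hindi/Bengali inputs are illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-iir-en"  # assumption: repo id from the metadata below
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Mixed-language batch: Hindi, then Bengali, both translated into English.
src_texts = ["मेरा नाम टॉम है।", "আমি বই পড়তে ভালোবাসি।"]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
outputs = model.generate(**batch)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```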
### System Info:
- hf_name: iir-eng
- source_languages: iir
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/iir-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir', 'en']
- src_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur_Arab', 'tgk_Cyrl', 'hin', 'kur_Latn', 'pes_Thaa', 'pus', 'san_Deva', 'oss', 'tly_Latn', 'jdt_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes_Latn', 'awa', 'sin'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/iir-eng/opus2m-2020-08-01.test.txt
- src_alpha3: iir
- tgt_alpha3: eng
- short_pair: iir-en
- chrF2_score: 0.424
- bleu: 21.3
- brevity_penalty: 1.0
- ref_len: 67026.0
- src_name: Indo-Iranian languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: iir
- tgt_alpha2: en
- prefer_old: False
- long_pair: iir-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["bn", "or", "gu", "mr", "ur", "hi", "ps", "os", "as", "si", "iir", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-iir-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bn",
"or",
"gu",
"mr",
"ur",
"hi",
"ps",
"os",
"as",
"si",
"iir",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bn",
"or",
"gu",
"mr",
"ur",
"hi",
"ps",
"os",
"as",
"si",
"iir",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bn #or #gu #mr #ur #hi #ps #os #as #si #iir #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### iir-eng
* source group: Indo-Iranian languages
* target group: English
* OPUS readme: iir-eng
* model: transformer
* source language(s): asm awa ben bho gom guj hif\_Latn hin jdt\_Cyrl kur\_Arab kur\_Latn mai mar npi ori oss pan\_Guru pes pes\_Latn pes\_Thaa pnb pus rom san\_Deva sin snd\_Arab tgk\_Cyrl tly\_Latn urd zza
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 8.1, chr-F: 0.324
testset: URL, BLEU: 8.1, chr-F: 0.309
testset: URL, BLEU: 12.1, chr-F: 0.380
testset: URL, BLEU: 6.0, chr-F: 0.280
testset: URL, BLEU: 13.9, chr-F: 0.327
testset: URL, BLEU: 7.0, chr-F: 0.219
testset: URL, BLEU: 42.5, chr-F: 0.576
testset: URL, BLEU: 27.3, chr-F: 0.452
testset: URL, BLEU: 5.6, chr-F: 0.262
testset: URL, BLEU: 15.9, chr-F: 0.350
testset: URL, BLEU: 10.1, chr-F: 0.247
testset: URL, BLEU: 36.5, chr-F: 0.544
testset: URL, BLEU: 11.4, chr-F: 0.094
testset: URL, BLEU: 6.6, chr-F: 0.256
testset: URL, BLEU: 3.4, chr-F: 0.149
testset: URL, BLEU: 17.4, chr-F: 0.301
testset: URL, BLEU: 65.4, chr-F: 0.703
testset: URL, BLEU: 22.5, chr-F: 0.468
testset: URL, BLEU: 21.3, chr-F: 0.424
testset: URL, BLEU: 3.4, chr-F: 0.185
testset: URL, BLEU: 4.8, chr-F: 0.244
testset: URL, BLEU: 1.6, chr-F: 0.173
testset: URL, BLEU: 14.8, chr-F: 0.348
testset: URL, BLEU: 1.1, chr-F: 0.182
testset: URL, BLEU: 2.8, chr-F: 0.185
testset: URL, BLEU: 2.8, chr-F: 0.185
testset: URL, BLEU: 22.8, chr-F: 0.474
testset: URL, BLEU: 8.2, chr-F: 0.287
testset: URL, BLEU: 11.9, chr-F: 0.321
testset: URL, BLEU: 0.9, chr-F: 0.076
testset: URL, BLEU: 23.9, chr-F: 0.438
testset: URL, BLEU: 0.6, chr-F: 0.098
### System Info:
* hf\_name: iir-eng
* source\_languages: iir
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir', 'en']
* src\_constituents: {'pnb', 'gom', 'ben', 'hif\_Latn', 'ori', 'guj', 'pan\_Guru', 'snd\_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur\_Arab', 'tgk\_Cyrl', 'hin', 'kur\_Latn', 'pes\_Thaa', 'pus', 'san\_Deva', 'oss', 'tly\_Latn', 'jdt\_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes\_Latn', 'awa', 'sin'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: iir
* tgt\_alpha3: eng
* short\_pair: iir-en
* chrF2\_score: 0.424
* bleu: 21.3
* brevity\_penalty: 1.0
* ref\_len: 67026.0
* src\_name: Indo-Iranian languages
* tgt\_name: English
* train\_date: 2020-08-01
* src\_alpha2: iir
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: iir-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### iir-eng\n\n\n* source group: Indo-Iranian languages\n* target group: English\n* OPUS readme: iir-eng\n* model: transformer\n* source language(s): asm awa ben bho gom guj hif\\_Latn hin jdt\\_Cyrl kur\\_Arab kur\\_Latn mai mar npi ori oss pan\\_Guru pes pes\\_Latn pes\\_Thaa pnb pus rom san\\_Deva sin snd\\_Arab tgk\\_Cyrl tly\\_Latn urd zza\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 8.1, chr-F: 0.324\ntestset: URL, BLEU: 8.1, chr-F: 0.309\ntestset: URL, BLEU: 12.1, chr-F: 0.380\ntestset: URL, BLEU: 6.0, chr-F: 0.280\ntestset: URL, BLEU: 13.9, chr-F: 0.327\ntestset: URL, BLEU: 7.0, chr-F: 0.219\ntestset: URL, BLEU: 42.5, chr-F: 0.576\ntestset: URL, BLEU: 27.3, chr-F: 0.452\ntestset: URL, BLEU: 5.6, chr-F: 0.262\ntestset: URL, BLEU: 15.9, chr-F: 0.350\ntestset: URL, BLEU: 10.1, chr-F: 0.247\ntestset: URL, BLEU: 36.5, chr-F: 0.544\ntestset: URL, BLEU: 11.4, chr-F: 0.094\ntestset: URL, BLEU: 6.6, chr-F: 0.256\ntestset: URL, BLEU: 3.4, chr-F: 0.149\ntestset: URL, BLEU: 17.4, chr-F: 0.301\ntestset: URL, BLEU: 65.4, chr-F: 0.703\ntestset: URL, BLEU: 22.5, chr-F: 0.468\ntestset: URL, BLEU: 21.3, chr-F: 0.424\ntestset: URL, BLEU: 3.4, chr-F: 0.185\ntestset: URL, BLEU: 4.8, chr-F: 0.244\ntestset: URL, BLEU: 1.6, chr-F: 0.173\ntestset: URL, BLEU: 14.8, chr-F: 0.348\ntestset: URL, BLEU: 1.1, chr-F: 0.182\ntestset: URL, BLEU: 2.8, chr-F: 0.185\ntestset: URL, BLEU: 2.8, chr-F: 0.185\ntestset: URL, BLEU: 22.8, chr-F: 0.474\ntestset: URL, BLEU: 8.2, chr-F: 0.287\ntestset: URL, BLEU: 11.9, chr-F: 0.321\ntestset: URL, BLEU: 0.9, chr-F: 0.076\ntestset: URL, BLEU: 23.9, chr-F: 0.438\ntestset: URL, BLEU: 0.6, chr-F: 0.098",
"### System Info:\n\n\n* hf\\_name: iir-eng\n* source\\_languages: iir\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir', 'en']\n* src\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur\\_Arab', 'tgk\\_Cyrl', 'hin', 'kur\\_Latn', 'pes\\_Thaa', 'pus', 'san\\_Deva', 'oss', 'tly\\_Latn', 'jdt\\_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes\\_Latn', 'awa', 'sin'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: iir\n* tgt\\_alpha3: eng\n* short\\_pair: iir-en\n* chrF2\\_score: 0.424\n* bleu: 21.3\n* brevity\\_penalty: 1.0\n* ref\\_len: 67026.0\n* src\\_name: Indo-Iranian languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: iir\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: iir-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bn #or #gu #mr #ur #hi #ps #os #as #si #iir #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### iir-eng\n\n\n* source group: Indo-Iranian languages\n* target group: English\n* OPUS readme: iir-eng\n* model: transformer\n* source language(s): asm awa ben bho gom guj hif\\_Latn hin jdt\\_Cyrl kur\\_Arab kur\\_Latn mai mar npi ori oss pan\\_Guru pes pes\\_Latn pes\\_Thaa pnb pus rom san\\_Deva sin snd\\_Arab tgk\\_Cyrl tly\\_Latn urd zza\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 8.1, chr-F: 0.324\ntestset: URL, BLEU: 8.1, chr-F: 0.309\ntestset: URL, BLEU: 12.1, chr-F: 0.380\ntestset: URL, BLEU: 6.0, chr-F: 0.280\ntestset: URL, BLEU: 13.9, chr-F: 0.327\ntestset: URL, BLEU: 7.0, chr-F: 0.219\ntestset: URL, BLEU: 42.5, chr-F: 0.576\ntestset: URL, BLEU: 27.3, chr-F: 0.452\ntestset: URL, BLEU: 5.6, chr-F: 0.262\ntestset: URL, BLEU: 15.9, chr-F: 0.350\ntestset: URL, BLEU: 10.1, chr-F: 0.247\ntestset: URL, BLEU: 36.5, chr-F: 0.544\ntestset: URL, BLEU: 11.4, chr-F: 0.094\ntestset: URL, BLEU: 6.6, chr-F: 0.256\ntestset: URL, BLEU: 3.4, chr-F: 0.149\ntestset: URL, BLEU: 17.4, chr-F: 0.301\ntestset: URL, BLEU: 65.4, chr-F: 0.703\ntestset: URL, BLEU: 22.5, chr-F: 0.468\ntestset: URL, BLEU: 21.3, chr-F: 0.424\ntestset: URL, BLEU: 3.4, chr-F: 0.185\ntestset: URL, BLEU: 4.8, chr-F: 0.244\ntestset: URL, BLEU: 1.6, chr-F: 0.173\ntestset: URL, BLEU: 14.8, chr-F: 0.348\ntestset: URL, BLEU: 1.1, chr-F: 0.182\ntestset: URL, BLEU: 2.8, chr-F: 0.185\ntestset: URL, BLEU: 2.8, chr-F: 0.185\ntestset: URL, BLEU: 22.8, chr-F: 0.474\ntestset: URL, BLEU: 8.2, chr-F: 0.287\ntestset: URL, BLEU: 11.9, chr-F: 0.321\ntestset: URL, BLEU: 0.9, chr-F: 0.076\ntestset: URL, BLEU: 23.9, chr-F: 0.438\ntestset: URL, BLEU: 0.6, chr-F: 0.098",
"### System Info:\n\n\n* hf\\_name: iir-eng\n* source\\_languages: iir\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir', 'en']\n* src\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur\\_Arab', 'tgk\\_Cyrl', 'hin', 'kur\\_Latn', 'pes\\_Thaa', 'pus', 'san\\_Deva', 'oss', 'tly\\_Latn', 'jdt\\_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes\\_Latn', 'awa', 'sin'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: iir\n* tgt\\_alpha3: eng\n* short\\_pair: iir-en\n* chrF2\\_score: 0.424\n* bleu: 21.3\n* brevity\\_penalty: 1.0\n* ref\\_len: 67026.0\n* src\\_name: Indo-Iranian languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: iir\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: iir-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
72,
923,
625
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bn #or #gu #mr #ur #hi #ps #os #as #si #iir #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### iir-eng\n\n\n* source group: Indo-Iranian languages\n* target group: English\n* OPUS readme: iir-eng\n* model: transformer\n* source language(s): asm awa ben bho gom guj hif\\_Latn hin jdt\\_Cyrl kur\\_Arab kur\\_Latn mai mar npi ori oss pan\\_Guru pes pes\\_Latn pes\\_Thaa pnb pus rom san\\_Deva sin snd\\_Arab tgk\\_Cyrl tly\\_Latn urd zza\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 8.1, chr-F: 0.324\ntestset: URL, BLEU: 8.1, chr-F: 0.309\ntestset: URL, BLEU: 12.1, chr-F: 0.380\ntestset: URL, BLEU: 6.0, chr-F: 0.280\ntestset: URL, BLEU: 13.9, chr-F: 0.327\ntestset: URL, BLEU: 7.0, chr-F: 0.219\ntestset: URL, BLEU: 42.5, chr-F: 0.576\ntestset: URL, BLEU: 27.3, chr-F: 0.452\ntestset: URL, BLEU: 5.6, chr-F: 0.262\ntestset: URL, BLEU: 15.9, chr-F: 0.350\ntestset: URL, BLEU: 10.1, chr-F: 0.247\ntestset: URL, BLEU: 36.5, chr-F: 0.544\ntestset: URL, BLEU: 11.4, chr-F: 0.094\ntestset: URL, BLEU: 6.6, chr-F: 0.256\ntestset: URL, BLEU: 3.4, chr-F: 0.149\ntestset: URL, BLEU: 17.4, chr-F: 0.301\ntestset: URL, BLEU: 65.4, chr-F: 0.703\ntestset: URL, BLEU: 22.5, chr-F: 0.468\ntestset: URL, BLEU: 21.3, chr-F: 0.424\ntestset: URL, BLEU: 3.4, chr-F: 0.185\ntestset: URL, BLEU: 4.8, chr-F: 0.244\ntestset: URL, BLEU: 1.6, chr-F: 0.173\ntestset: URL, BLEU: 14.8, chr-F: 0.348\ntestset: URL, BLEU: 1.1, chr-F: 0.182\ntestset: URL, BLEU: 2.8, chr-F: 0.185\ntestset: URL, BLEU: 2.8, chr-F: 0.185\ntestset: URL, BLEU: 22.8, chr-F: 0.474\ntestset: URL, BLEU: 8.2, chr-F: 0.287\ntestset: URL, BLEU: 11.9, chr-F: 0.321\ntestset: URL, BLEU: 0.9, chr-F: 0.076\ntestset: URL, BLEU: 23.9, chr-F: 0.438\ntestset: URL, BLEU: 0.6, chr-F: 0.098### System Info:\n\n\n* hf\\_name: iir-eng\n* source\\_languages: iir\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir', 'en']\n* src\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur\\_Arab', 'tgk\\_Cyrl', 'hin', 'kur\\_Latn', 'pes\\_Thaa', 'pus', 'san\\_Deva', 'oss', 'tly\\_Latn', 'jdt\\_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes\\_Latn', 'awa', 'sin'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: iir\n* tgt\\_alpha3: eng\n* short\\_pair: iir-en\n* chrF2\\_score: 0.424\n* bleu: 21.3\n* brevity\\_penalty: 1.0\n* ref\\_len: 67026.0\n* src\\_name: Indo-Iranian languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: iir\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: iir-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### iir-iir
* source group: Indo-Iranian languages
* target group: Indo-Iranian languages
* OPUS readme: [iir-iir](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/iir-iir/README.md)
* model: transformer
* source language(s): asm hin mar urd zza
* target language(s): asm hin mar urd zza
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-iir/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-iir/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/iir-iir/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.asm-hin.asm.hin | 3.5 | 0.202 |
| Tatoeba-test.asm-zza.asm.zza | 12.4 | 0.014 |
| Tatoeba-test.hin-asm.hin.asm | 6.2 | 0.238 |
| Tatoeba-test.hin-mar.hin.mar | 27.0 | 0.560 |
| Tatoeba-test.hin-urd.hin.urd | 21.4 | 0.507 |
| Tatoeba-test.mar-hin.mar.hin | 13.4 | 0.463 |
| Tatoeba-test.multi.multi | 17.7 | 0.460 |
| Tatoeba-test.urd-hin.urd.hin | 13.4 | 0.363 |
| Tatoeba-test.zza-asm.zza.asm | 5.3 | 0.000 |
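
As noted above, this many-to-many model requires a sentence-initial target-language token of the form `>>id<<`. A sketch to make that concrete: the repo id is assumed from the metadata below, `hin` and `mar` are valid target IDs per the target-language list above, and the Urdu source sentence is illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-iir-iir"  # assumption: repo id from the metadata below
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The >>id<< prefix selects the target language: the same Urdu sentence
# ("My name is Tom.") is sent to Hindi, then to Marathi.
src_texts = [">>hin<< میرا نام ٹام ہے۔", ">>mar<< میرا نام ٹام ہے۔"]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
outputs = model.generate(**batch)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```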
### System Info:
- hf_name: iir-iir
- source_languages: iir
- target_languages: iir
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/iir-iir/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir']
- src_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur_Arab', 'tgk_Cyrl', 'hin', 'kur_Latn', 'pes_Thaa', 'pus', 'san_Deva', 'oss', 'tly_Latn', 'jdt_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes_Latn', 'awa', 'sin'}
- tgt_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur_Arab', 'tgk_Cyrl', 'hin', 'kur_Latn', 'pes_Thaa', 'pus', 'san_Deva', 'oss', 'tly_Latn', 'jdt_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes_Latn', 'awa', 'sin'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/iir-iir/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/iir-iir/opus-2020-07-27.test.txt
- src_alpha3: iir
- tgt_alpha3: iir
- short_pair: iir-iir
- chrF2_score: 0.46
- bleu: 17.7
- brevity_penalty: 1.0
- ref_len: 4992.0
- src_name: Indo-Iranian languages
- tgt_name: Indo-Iranian languages
- train_date: 2020-07-27
- src_alpha2: iir
- tgt_alpha2: iir
- prefer_old: False
- long_pair: iir-iir
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["bn", "or", "gu", "mr", "ur", "hi", "ps", "os", "as", "si", "iir"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-iir-iir | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bn",
"or",
"gu",
"mr",
"ur",
"hi",
"ps",
"os",
"as",
"si",
"iir",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bn",
"or",
"gu",
"mr",
"ur",
"hi",
"ps",
"os",
"as",
"si",
"iir"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bn #or #gu #mr #ur #hi #ps #os #as #si #iir #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### iir-iir
* source group: Indo-Iranian languages
* target group: Indo-Iranian languages
* OPUS readme: iir-iir
* model: transformer
* source language(s): asm hin mar urd zza
* target language(s): asm hin mar urd zza
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 3.5, chr-F: 0.202
testset: URL, BLEU: 12.4, chr-F: 0.014
testset: URL, BLEU: 6.2, chr-F: 0.238
testset: URL, BLEU: 27.0, chr-F: 0.560
testset: URL, BLEU: 21.4, chr-F: 0.507
testset: URL, BLEU: 13.4, chr-F: 0.463
testset: URL, BLEU: 17.7, chr-F: 0.460
testset: URL, BLEU: 13.4, chr-F: 0.363
testset: URL, BLEU: 5.3, chr-F: 0.000
### System Info:
* hf\_name: iir-iir
* source\_languages: iir
* target\_languages: iir
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir']
* src\_constituents: {'pnb', 'gom', 'ben', 'hif\_Latn', 'ori', 'guj', 'pan\_Guru', 'snd\_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur\_Arab', 'tgk\_Cyrl', 'hin', 'kur\_Latn', 'pes\_Thaa', 'pus', 'san\_Deva', 'oss', 'tly\_Latn', 'jdt\_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes\_Latn', 'awa', 'sin'}
* tgt\_constituents: {'pnb', 'gom', 'ben', 'hif\_Latn', 'ori', 'guj', 'pan\_Guru', 'snd\_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur\_Arab', 'tgk\_Cyrl', 'hin', 'kur\_Latn', 'pes\_Thaa', 'pus', 'san\_Deva', 'oss', 'tly\_Latn', 'jdt\_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes\_Latn', 'awa', 'sin'}
* src\_multilingual: True
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: iir
* tgt\_alpha3: iir
* short\_pair: iir-iir
* chrF2\_score: 0.46
* bleu: 17.7
* brevity\_penalty: 1.0
* ref\_len: 4992.0
* src\_name: Indo-Iranian languages
* tgt\_name: Indo-Iranian languages
* train\_date: 2020-07-27
* src\_alpha2: iir
* tgt\_alpha2: iir
* prefer\_old: False
* long\_pair: iir-iir
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### iir-iir\n\n\n* source group: Indo-Iranian languages\n* target group: Indo-Iranian languages\n* OPUS readme: iir-iir\n* model: transformer\n* source language(s): asm hin mar urd zza\n* target language(s): asm hin mar urd zza\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 3.5, chr-F: 0.202\ntestset: URL, BLEU: 12.4, chr-F: 0.014\ntestset: URL, BLEU: 6.2, chr-F: 0.238\ntestset: URL, BLEU: 27.0, chr-F: 0.560\ntestset: URL, BLEU: 21.4, chr-F: 0.507\ntestset: URL, BLEU: 13.4, chr-F: 0.463\ntestset: URL, BLEU: 17.7, chr-F: 0.460\ntestset: URL, BLEU: 13.4, chr-F: 0.363\ntestset: URL, BLEU: 5.3, chr-F: 0.000",
"### System Info:\n\n\n* hf\\_name: iir-iir\n* source\\_languages: iir\n* target\\_languages: iir\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir']\n* src\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur\\_Arab', 'tgk\\_Cyrl', 'hin', 'kur\\_Latn', 'pes\\_Thaa', 'pus', 'san\\_Deva', 'oss', 'tly\\_Latn', 'jdt\\_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes\\_Latn', 'awa', 'sin'}\n* tgt\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur\\_Arab', 'tgk\\_Cyrl', 'hin', 'kur\\_Latn', 'pes\\_Thaa', 'pus', 'san\\_Deva', 'oss', 'tly\\_Latn', 'jdt\\_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes\\_Latn', 'awa', 'sin'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: iir\n* tgt\\_alpha3: iir\n* short\\_pair: iir-iir\n* chrF2\\_score: 0.46\n* bleu: 17.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 4992.0\n* src\\_name: Indo-Iranian languages\n* tgt\\_name: Indo-Iranian languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: iir\n* tgt\\_alpha2: iir\n* prefer\\_old: False\n* long\\_pair: iir-iir\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bn #or #gu #mr #ur #hi #ps #os #as #si #iir #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### iir-iir\n\n\n* source group: Indo-Iranian languages\n* target group: Indo-Iranian languages\n* OPUS readme: iir-iir\n* model: transformer\n* source language(s): asm hin mar urd zza\n* target language(s): asm hin mar urd zza\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 3.5, chr-F: 0.202\ntestset: URL, BLEU: 12.4, chr-F: 0.014\ntestset: URL, BLEU: 6.2, chr-F: 0.238\ntestset: URL, BLEU: 27.0, chr-F: 0.560\ntestset: URL, BLEU: 21.4, chr-F: 0.507\ntestset: URL, BLEU: 13.4, chr-F: 0.463\ntestset: URL, BLEU: 17.7, chr-F: 0.460\ntestset: URL, BLEU: 13.4, chr-F: 0.363\ntestset: URL, BLEU: 5.3, chr-F: 0.000",
"### System Info:\n\n\n* hf\\_name: iir-iir\n* source\\_languages: iir\n* target\\_languages: iir\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir']\n* src\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur\\_Arab', 'tgk\\_Cyrl', 'hin', 'kur\\_Latn', 'pes\\_Thaa', 'pus', 'san\\_Deva', 'oss', 'tly\\_Latn', 'jdt\\_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes\\_Latn', 'awa', 'sin'}\n* tgt\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur\\_Arab', 'tgk\\_Cyrl', 'hin', 'kur\\_Latn', 'pes\\_Thaa', 'pus', 'san\\_Deva', 'oss', 'tly\\_Latn', 'jdt\\_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes\\_Latn', 'awa', 'sin'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: iir\n* tgt\\_alpha3: iir\n* short\\_pair: iir-iir\n* chrF2\\_score: 0.46\n* bleu: 17.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 4992.0\n* src\\_name: Indo-Iranian languages\n* tgt\\_name: Indo-Iranian languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: iir\n* tgt\\_alpha2: iir\n* prefer\\_old: False\n* long\\_pair: iir-iir\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
70,
359,
815
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bn #or #gu #mr #ur #hi #ps #os #as #si #iir #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### iir-iir\n\n\n* source group: Indo-Iranian languages\n* target group: Indo-Iranian languages\n* OPUS readme: iir-iir\n* model: transformer\n* source language(s): asm hin mar urd zza\n* target language(s): asm hin mar urd zza\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 3.5, chr-F: 0.202\ntestset: URL, BLEU: 12.4, chr-F: 0.014\ntestset: URL, BLEU: 6.2, chr-F: 0.238\ntestset: URL, BLEU: 27.0, chr-F: 0.560\ntestset: URL, BLEU: 21.4, chr-F: 0.507\ntestset: URL, BLEU: 13.4, chr-F: 0.463\ntestset: URL, BLEU: 17.7, chr-F: 0.460\ntestset: URL, BLEU: 13.4, chr-F: 0.363\ntestset: URL, BLEU: 5.3, chr-F: 0.000### System Info:\n\n\n* hf\\_name: iir-iir\n* source\\_languages: iir\n* target\\_languages: iir\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'ps', 'os', 'as', 'si', 'iir']\n* src\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur\\_Arab', 'tgk\\_Cyrl', 'hin', 'kur\\_Latn', 'pes\\_Thaa', 'pus', 'san\\_Deva', 'oss', 'tly\\_Latn', 'jdt\\_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes\\_Latn', 'awa', 'sin'}\n* tgt\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'pes', 'bho', 'kur\\_Arab', 'tgk\\_Cyrl', 'hin', 'kur\\_Latn', 'pes\\_Thaa', 'pus', 'san\\_Deva', 'oss', 'tly\\_Latn', 'jdt\\_Cyrl', 'asm', 'zza', 'rom', 'mai', 'pes\\_Latn', 'awa', 'sin'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: iir\n* tgt\\_alpha3: iir\n* short\\_pair: iir-iir\n* chrF2\\_score: 0.46\n* bleu: 17.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 4992.0\n* src\\_name: Indo-Iranian languages\n* tgt\\_name: Indo-Iranian languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: iir\n* tgt\\_alpha2: iir\n* prefer\\_old: False\n* long\\_pair: iir-iir\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-ilo-de
* source languages: ilo
* target languages: de
* OPUS readme: [ilo-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ilo-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ilo-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ilo-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ilo-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ilo.de | 26.1 | 0.474 |
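
The metadata below lists both `pytorch` and `tf` tags, so TensorFlow weights should be available as well. A hedged TensorFlow sketch: the repo id is assumed from the metadata below and the Iloko input is illustrative.

```python
from transformers import MarianTokenizer, TFMarianMTModel

model_name = "Helsinki-NLP/opus-mt-ilo-de"  # assumption: repo id from the metadata below
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = TFMarianMTModel.from_pretrained(model_name)

# Illustrative Iloko input ("Good morning."); note return_tensors="tf".
batch = tokenizer(["Naimbag a bigat."], return_tensors="tf")
outputs = model.generate(**batch)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```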
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ilo-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ilo",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ilo #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ilo-de
* source languages: ilo
* target languages: de
* OPUS readme: ilo-de
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.1, chr-F: 0.474
| [
"### opus-mt-ilo-de\n\n\n* source languages: ilo\n* target languages: de\n* OPUS readme: ilo-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.474"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ilo #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ilo-de\n\n\n* source languages: ilo\n* target languages: de\n* OPUS readme: ilo-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.474"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ilo #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ilo-de\n\n\n* source languages: ilo\n* target languages: de\n* OPUS readme: ilo-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.474"
] |
translation | transformers |
### ilo-eng
* source group: Iloko
* target group: English
* OPUS readme: [ilo-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ilo-eng/README.md)
* model: transformer-align
* source language(s): ilo
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ilo-eng/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ilo-eng/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ilo-eng/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ilo.eng | 36.4 | 0.558 |
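
The System Info block below records the headline chrF2 and BLEU scores; the `evaluate` library offers one way to recompute them. A hedged sketch: the prediction and reference lists are placeholders for text taken from the linked test files, and exact reproduction of the reported numbers is not guaranteed.

```python
import evaluate

# Placeholders: populate from the linked test-set translations and references.
predictions = ["Good morning."]
references = [["Good morning."]]

chrf = evaluate.load("chrf")
bleu = evaluate.load("sacrebleu")
print(chrf.compute(predictions=predictions, references=references)["score"])
print(bleu.compute(predictions=predictions, references=references)["score"])
```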
### System Info:
- hf_name: ilo-eng
- source_languages: ilo
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ilo-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ilo', 'en']
- src_constituents: {'ilo'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ilo-eng/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ilo-eng/opus-2020-06-16.test.txt
- src_alpha3: ilo
- tgt_alpha3: eng
- short_pair: ilo-en
- chrF2_score: 0.5579999999999999
- bleu: 36.4
- brevity_penalty: 1.0
- ref_len: 7384.0
- src_name: Iloko
- tgt_name: English
- train_date: 2020-06-16
- src_alpha2: ilo
- tgt_alpha2: en
- prefer_old: False
- long_pair: ilo-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ilo", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ilo-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ilo",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ilo",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ilo #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ilo-eng
* source group: Iloko
* target group: English
* OPUS readme: ilo-eng
* model: transformer-align
* source language(s): ilo
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 36.4, chr-F: 0.558
### System Info:
* hf\_name: ilo-eng
* source\_languages: ilo
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ilo', 'en']
* src\_constituents: {'ilo'}
* tgt\_constituents: {'eng'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm12k,spm12k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ilo
* tgt\_alpha3: eng
* short\_pair: ilo-en
* chrF2\_score: 0.5579999999999999
* bleu: 36.4
* brevity\_penalty: 1.0
* ref\_len: 7384.0
* src\_name: Iloko
* tgt\_name: English
* train\_date: 2020-06-16
* src\_alpha2: ilo
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: ilo-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ilo-eng\n\n\n* source group: Iloko\n* target group: English\n* OPUS readme: ilo-eng\n* model: transformer-align\n* source language(s): ilo\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.4, chr-F: 0.558",
"### System Info:\n\n\n* hf\\_name: ilo-eng\n* source\\_languages: ilo\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ilo', 'en']\n* src\\_constituents: {'ilo'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ilo\n* tgt\\_alpha3: eng\n* short\\_pair: ilo-en\n* chrF2\\_score: 0.5579999999999999\n* bleu: 36.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 7384.0\n* src\\_name: Iloko\n* tgt\\_name: English\n* train\\_date: 2020-06-16\n* src\\_alpha2: ilo\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: ilo-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ilo #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ilo-eng\n\n\n* source group: Iloko\n* target group: English\n* OPUS readme: ilo-eng\n* model: transformer-align\n* source language(s): ilo\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.4, chr-F: 0.558",
"### System Info:\n\n\n* hf\\_name: ilo-eng\n* source\\_languages: ilo\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ilo', 'en']\n* src\\_constituents: {'ilo'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ilo\n* tgt\\_alpha3: eng\n* short\\_pair: ilo-en\n* chrF2\\_score: 0.5579999999999999\n* bleu: 36.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 7384.0\n* src\\_name: Iloko\n* tgt\\_name: English\n* train\\_date: 2020-06-16\n* src\\_alpha2: ilo\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: ilo-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
135,
413
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ilo #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ilo-eng\n\n\n* source group: Iloko\n* target group: English\n* OPUS readme: ilo-eng\n* model: transformer-align\n* source language(s): ilo\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.4, chr-F: 0.558### System Info:\n\n\n* hf\\_name: ilo-eng\n* source\\_languages: ilo\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ilo', 'en']\n* src\\_constituents: {'ilo'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ilo\n* tgt\\_alpha3: eng\n* short\\_pair: ilo-en\n* chrF2\\_score: 0.5579999999999999\n* bleu: 36.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 7384.0\n* src\\_name: Iloko\n* tgt\\_name: English\n* train\\_date: 2020-06-16\n* src\\_alpha2: ilo\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: ilo-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-ilo-es
* source languages: ilo
* target languages: es
* OPUS readme: [ilo-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ilo-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ilo-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ilo-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ilo-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ilo.es | 30.7 | 0.496 |
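The checkpoint above can be loaded through the Marian classes in `transformers`. A minimal sketch, assuming `transformers` and `sentencepiece` are installed; the Iloko input sentence is only an illustrative placeholder, not taken from the test set:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ilo-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# "Naimbag a bigat." is a placeholder Iloko greeting ("Good morning.")
batch = tokenizer(["Naimbag a bigat."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```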
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ilo-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ilo",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ilo #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ilo-es
* source languages: ilo
* target languages: es
* OPUS readme: ilo-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.7, chr-F: 0.496
| [
"### opus-mt-ilo-es\n\n\n* source languages: ilo\n* target languages: es\n* OPUS readme: ilo-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.7, chr-F: 0.496"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ilo #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ilo-es\n\n\n* source languages: ilo\n* target languages: es\n* OPUS readme: ilo-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.7, chr-F: 0.496"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ilo #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ilo-es\n\n\n* source languages: ilo\n* target languages: es\n* OPUS readme: ilo-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.7, chr-F: 0.496"
] |
translation | transformers |
### opus-mt-ilo-fi
* source languages: ilo
* target languages: fi
* OPUS readme: [ilo-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ilo-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ilo-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ilo-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ilo-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ilo.fi | 27.7 | 0.516 |
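Because the card is tagged both `pytorch` and `tf`, the same checkpoint can also be driven from TensorFlow. A hedged sketch; if no native TF weights are published for this checkpoint, `from_pt=True` converts the PyTorch ones on the fly:

```python
from transformers import MarianTokenizer, TFMarianMTModel

model_name = "Helsinki-NLP/opus-mt-ilo-fi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
# add from_pt=True if only PyTorch weights exist for this repo
model = TFMarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Naimbag a bigat."], return_tensors="tf", padding=True)  # placeholder Iloko input
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```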
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ilo-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ilo",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ilo #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ilo-fi
* source languages: ilo
* target languages: fi
* OPUS readme: ilo-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.7, chr-F: 0.516
| [
"### opus-mt-ilo-fi\n\n\n* source languages: ilo\n* target languages: fi\n* OPUS readme: ilo-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.7, chr-F: 0.516"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ilo #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ilo-fi\n\n\n* source languages: ilo\n* target languages: fi\n* OPUS readme: ilo-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.7, chr-F: 0.516"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ilo #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ilo-fi\n\n\n* source languages: ilo\n* target languages: fi\n* OPUS readme: ilo-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.7, chr-F: 0.516"
] |
translation | transformers |
### opus-mt-ilo-sv
* source languages: ilo
* target languages: sv
* OPUS readme: [ilo-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ilo-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ilo-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ilo-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ilo-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ilo.sv | 31.9 | 0.515 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ilo-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ilo",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ilo #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-ilo-sv
* source languages: ilo
* target languages: sv
* OPUS readme: ilo-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.9, chr-F: 0.515
| [
"### opus-mt-ilo-sv\n\n\n* source languages: ilo\n* target languages: sv\n* OPUS readme: ilo-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.9, chr-F: 0.515"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ilo #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-ilo-sv\n\n\n* source languages: ilo\n* target languages: sv\n* OPUS readme: ilo-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.9, chr-F: 0.515"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ilo #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ilo-sv\n\n\n* source languages: ilo\n* target languages: sv\n* OPUS readme: ilo-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.9, chr-F: 0.515"
] |
translation | transformers |
### inc-eng
* source group: Indic languages
* target group: English
* OPUS readme: [inc-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/inc-eng/README.md)
* source language(s): asm awa ben bho gom guj hif_Latn hin mai mar npi ori pan_Guru pnb rom san_Deva sin snd_Arab urd
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/inc-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/inc-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/inc-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-hineng.hin.eng | 8.9 | 0.341 |
| newsdev2019-engu-gujeng.guj.eng | 8.7 | 0.321 |
| newstest2014-hien-hineng.hin.eng | 13.1 | 0.396 |
| newstest2019-guen-gujeng.guj.eng | 6.5 | 0.290 |
| Tatoeba-test.asm-eng.asm.eng | 18.1 | 0.363 |
| Tatoeba-test.awa-eng.awa.eng | 6.2 | 0.222 |
| Tatoeba-test.ben-eng.ben.eng | 44.7 | 0.595 |
| Tatoeba-test.bho-eng.bho.eng | 29.4 | 0.458 |
| Tatoeba-test.guj-eng.guj.eng | 19.3 | 0.383 |
| Tatoeba-test.hif-eng.hif.eng | 3.7 | 0.220 |
| Tatoeba-test.hin-eng.hin.eng | 38.6 | 0.564 |
| Tatoeba-test.kok-eng.kok.eng | 6.6 | 0.287 |
| Tatoeba-test.lah-eng.lah.eng | 16.0 | 0.272 |
| Tatoeba-test.mai-eng.mai.eng | 75.6 | 0.796 |
| Tatoeba-test.mar-eng.mar.eng | 25.9 | 0.497 |
| Tatoeba-test.multi.eng | 29.0 | 0.502 |
| Tatoeba-test.nep-eng.nep.eng | 4.5 | 0.198 |
| Tatoeba-test.ori-eng.ori.eng | 5.0 | 0.226 |
| Tatoeba-test.pan-eng.pan.eng | 17.4 | 0.375 |
| Tatoeba-test.rom-eng.rom.eng | 1.7 | 0.174 |
| Tatoeba-test.san-eng.san.eng | 5.0 | 0.173 |
| Tatoeba-test.sin-eng.sin.eng | 31.2 | 0.511 |
| Tatoeba-test.snd-eng.snd.eng | 45.7 | 0.670 |
| Tatoeba-test.urd-eng.urd.eng | 25.6 | 0.456 |
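Since this is a many-to-one model (any of the listed Indic source languages into English), no target-language token is needed and the high-level `pipeline` API suffices. A minimal sketch with a placeholder Hindi input:

```python
from transformers import pipeline

# assumes transformers and sentencepiece are installed
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-inc-en")
# any of the supported source languages can be passed directly
print(translator("मुझे हिंदी पसंद है।"))  # placeholder Hindi sentence
```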
### System Info:
- hf_name: inc-eng
- source_languages: inc
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/inc-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc', 'en']
- src_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/inc-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/inc-eng/opus2m-2020-08-01.test.txt
- src_alpha3: inc
- tgt_alpha3: eng
- short_pair: inc-en
- chrF2_score: 0.502
- bleu: 29.0
- brevity_penalty: 1.0
- ref_len: 64706.0
- src_name: Indic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: inc
- tgt_alpha2: en
- prefer_old: False
- long_pair: inc-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["bn", "or", "gu", "mr", "ur", "hi", "as", "si", "inc", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-inc-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bn",
"or",
"gu",
"mr",
"ur",
"hi",
"as",
"si",
"inc",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bn",
"or",
"gu",
"mr",
"ur",
"hi",
"as",
"si",
"inc",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bn #or #gu #mr #ur #hi #as #si #inc #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### inc-eng
* source group: Indic languages
* target group: English
* OPUS readme: inc-eng
* source language(s): asm awa ben bho gom guj hif\_Latn hin mai mar npi ori pan\_Guru pnb rom san\_Deva sin snd\_Arab urd
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 8.9, chr-F: 0.341
testset: URL, BLEU: 8.7, chr-F: 0.321
testset: URL, BLEU: 13.1, chr-F: 0.396
testset: URL, BLEU: 6.5, chr-F: 0.290
testset: URL, BLEU: 18.1, chr-F: 0.363
testset: URL, BLEU: 6.2, chr-F: 0.222
testset: URL, BLEU: 44.7, chr-F: 0.595
testset: URL, BLEU: 29.4, chr-F: 0.458
testset: URL, BLEU: 19.3, chr-F: 0.383
testset: URL, BLEU: 3.7, chr-F: 0.220
testset: URL, BLEU: 38.6, chr-F: 0.564
testset: URL, BLEU: 6.6, chr-F: 0.287
testset: URL, BLEU: 16.0, chr-F: 0.272
testset: URL, BLEU: 75.6, chr-F: 0.796
testset: URL, BLEU: 25.9, chr-F: 0.497
testset: URL, BLEU: 29.0, chr-F: 0.502
testset: URL, BLEU: 4.5, chr-F: 0.198
testset: URL, BLEU: 5.0, chr-F: 0.226
testset: URL, BLEU: 17.4, chr-F: 0.375
testset: URL, BLEU: 1.7, chr-F: 0.174
testset: URL, BLEU: 5.0, chr-F: 0.173
testset: URL, BLEU: 31.2, chr-F: 0.511
testset: URL, BLEU: 45.7, chr-F: 0.670
testset: URL, BLEU: 25.6, chr-F: 0.456
### System Info:
* hf\_name: inc-eng
* source\_languages: inc
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc', 'en']
* src\_constituents: {'pnb', 'gom', 'ben', 'hif\_Latn', 'ori', 'guj', 'pan\_Guru', 'snd\_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san\_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: inc
* tgt\_alpha3: eng
* short\_pair: inc-en
* chrF2\_score: 0.502
* bleu: 29.0
* brevity\_penalty: 1.0
* ref\_len: 64706.0
* src\_name: Indic languages
* tgt\_name: English
* train\_date: 2020-08-01
* src\_alpha2: inc
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: inc-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### inc-eng\n\n\n* source group: Indic languages\n* target group: English\n* OPUS readme: inc-eng\n* model: transformer\n* source language(s): asm awa ben bho gom guj hif\\_Latn hin mai mar npi ori pan\\_Guru pnb rom san\\_Deva sin snd\\_Arab urd\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 8.9, chr-F: 0.341\ntestset: URL, BLEU: 8.7, chr-F: 0.321\ntestset: URL, BLEU: 13.1, chr-F: 0.396\ntestset: URL, BLEU: 6.5, chr-F: 0.290\ntestset: URL, BLEU: 18.1, chr-F: 0.363\ntestset: URL, BLEU: 6.2, chr-F: 0.222\ntestset: URL, BLEU: 44.7, chr-F: 0.595\ntestset: URL, BLEU: 29.4, chr-F: 0.458\ntestset: URL, BLEU: 19.3, chr-F: 0.383\ntestset: URL, BLEU: 3.7, chr-F: 0.220\ntestset: URL, BLEU: 38.6, chr-F: 0.564\ntestset: URL, BLEU: 6.6, chr-F: 0.287\ntestset: URL, BLEU: 16.0, chr-F: 0.272\ntestset: URL, BLEU: 75.6, chr-F: 0.796\ntestset: URL, BLEU: 25.9, chr-F: 0.497\ntestset: URL, BLEU: 29.0, chr-F: 0.502\ntestset: URL, BLEU: 4.5, chr-F: 0.198\ntestset: URL, BLEU: 5.0, chr-F: 0.226\ntestset: URL, BLEU: 17.4, chr-F: 0.375\ntestset: URL, BLEU: 1.7, chr-F: 0.174\ntestset: URL, BLEU: 5.0, chr-F: 0.173\ntestset: URL, BLEU: 31.2, chr-F: 0.511\ntestset: URL, BLEU: 45.7, chr-F: 0.670\ntestset: URL, BLEU: 25.6, chr-F: 0.456",
"### System Info:\n\n\n* hf\\_name: inc-eng\n* source\\_languages: inc\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc', 'en']\n* src\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san\\_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: inc\n* tgt\\_alpha3: eng\n* short\\_pair: inc-en\n* chrF2\\_score: 0.502\n* bleu: 29.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 64706.0\n* src\\_name: Indic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: inc\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: inc-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bn #or #gu #mr #ur #hi #as #si #inc #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### inc-eng\n\n\n* source group: Indic languages\n* target group: English\n* OPUS readme: inc-eng\n* model: transformer\n* source language(s): asm awa ben bho gom guj hif\\_Latn hin mai mar npi ori pan\\_Guru pnb rom san\\_Deva sin snd\\_Arab urd\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 8.9, chr-F: 0.341\ntestset: URL, BLEU: 8.7, chr-F: 0.321\ntestset: URL, BLEU: 13.1, chr-F: 0.396\ntestset: URL, BLEU: 6.5, chr-F: 0.290\ntestset: URL, BLEU: 18.1, chr-F: 0.363\ntestset: URL, BLEU: 6.2, chr-F: 0.222\ntestset: URL, BLEU: 44.7, chr-F: 0.595\ntestset: URL, BLEU: 29.4, chr-F: 0.458\ntestset: URL, BLEU: 19.3, chr-F: 0.383\ntestset: URL, BLEU: 3.7, chr-F: 0.220\ntestset: URL, BLEU: 38.6, chr-F: 0.564\ntestset: URL, BLEU: 6.6, chr-F: 0.287\ntestset: URL, BLEU: 16.0, chr-F: 0.272\ntestset: URL, BLEU: 75.6, chr-F: 0.796\ntestset: URL, BLEU: 25.9, chr-F: 0.497\ntestset: URL, BLEU: 29.0, chr-F: 0.502\ntestset: URL, BLEU: 4.5, chr-F: 0.198\ntestset: URL, BLEU: 5.0, chr-F: 0.226\ntestset: URL, BLEU: 17.4, chr-F: 0.375\ntestset: URL, BLEU: 1.7, chr-F: 0.174\ntestset: URL, BLEU: 5.0, chr-F: 0.173\ntestset: URL, BLEU: 31.2, chr-F: 0.511\ntestset: URL, BLEU: 45.7, chr-F: 0.670\ntestset: URL, BLEU: 25.6, chr-F: 0.456",
"### System Info:\n\n\n* hf\\_name: inc-eng\n* source\\_languages: inc\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc', 'en']\n* src\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san\\_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: inc\n* tgt\\_alpha3: eng\n* short\\_pair: inc-en\n* chrF2\\_score: 0.502\n* bleu: 29.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 64706.0\n* src\\_name: Indic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: inc\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: inc-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
67,
690,
524
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bn #or #gu #mr #ur #hi #as #si #inc #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### inc-eng\n\n\n* source group: Indic languages\n* target group: English\n* OPUS readme: inc-eng\n* model: transformer\n* source language(s): asm awa ben bho gom guj hif\\_Latn hin mai mar npi ori pan\\_Guru pnb rom san\\_Deva sin snd\\_Arab urd\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 8.9, chr-F: 0.341\ntestset: URL, BLEU: 8.7, chr-F: 0.321\ntestset: URL, BLEU: 13.1, chr-F: 0.396\ntestset: URL, BLEU: 6.5, chr-F: 0.290\ntestset: URL, BLEU: 18.1, chr-F: 0.363\ntestset: URL, BLEU: 6.2, chr-F: 0.222\ntestset: URL, BLEU: 44.7, chr-F: 0.595\ntestset: URL, BLEU: 29.4, chr-F: 0.458\ntestset: URL, BLEU: 19.3, chr-F: 0.383\ntestset: URL, BLEU: 3.7, chr-F: 0.220\ntestset: URL, BLEU: 38.6, chr-F: 0.564\ntestset: URL, BLEU: 6.6, chr-F: 0.287\ntestset: URL, BLEU: 16.0, chr-F: 0.272\ntestset: URL, BLEU: 75.6, chr-F: 0.796\ntestset: URL, BLEU: 25.9, chr-F: 0.497\ntestset: URL, BLEU: 29.0, chr-F: 0.502\ntestset: URL, BLEU: 4.5, chr-F: 0.198\ntestset: URL, BLEU: 5.0, chr-F: 0.226\ntestset: URL, BLEU: 17.4, chr-F: 0.375\ntestset: URL, BLEU: 1.7, chr-F: 0.174\ntestset: URL, BLEU: 5.0, chr-F: 0.173\ntestset: URL, BLEU: 31.2, chr-F: 0.511\ntestset: URL, BLEU: 45.7, chr-F: 0.670\ntestset: URL, BLEU: 25.6, chr-F: 0.456### System Info:\n\n\n* hf\\_name: inc-eng\n* source\\_languages: inc\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc', 'en']\n* src\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san\\_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: inc\n* tgt\\_alpha3: eng\n* short\\_pair: inc-en\n* chrF2\\_score: 0.502\n* bleu: 29.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 64706.0\n* src\\_name: Indic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: inc\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: inc-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### inc-inc
* source group: Indic languages
* target group: Indic languages
* OPUS readme: [inc-inc](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/inc-inc/README.md)
* source language(s): asm hin mar urd
* target language(s): asm hin mar urd
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/inc-inc/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/inc-inc/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/inc-inc/opus-2020-07-27.eval.txt)
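Because this is a multilingual many-to-many model, every source sentence must start with the `>>id<<` token naming the target language. A minimal sketch translating a placeholder Hindi sentence into Marathi (`>>mar<<`, one of the listed target IDs):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-inc-inc"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# the sentence-initial >>mar<< token selects Marathi as the target language
src = ">>mar<< मुझे हिंदी पसंद है।"  # placeholder Hindi input
batch = tokenizer([src], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```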
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.asm-hin.asm.hin | 2.6 | 0.231 |
| Tatoeba-test.hin-asm.hin.asm | 9.1 | 0.262 |
| Tatoeba-test.hin-mar.hin.mar | 28.1 | 0.548 |
| Tatoeba-test.hin-urd.hin.urd | 19.9 | 0.508 |
| Tatoeba-test.mar-hin.mar.hin | 11.6 | 0.466 |
| Tatoeba-test.multi.multi | 17.1 | 0.464 |
| Tatoeba-test.urd-hin.urd.hin | 13.5 | 0.377 |
### System Info:
- hf_name: inc-inc
- source_languages: inc
- target_languages: inc
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/inc-inc/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc']
- src_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}
- tgt_constituents: {'pnb', 'gom', 'ben', 'hif_Latn', 'ori', 'guj', 'pan_Guru', 'snd_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/inc-inc/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/inc-inc/opus-2020-07-27.test.txt
- src_alpha3: inc
- tgt_alpha3: inc
- short_pair: inc-inc
- chrF2_score: 0.46399999999999997
- bleu: 17.1
- brevity_penalty: 1.0
- ref_len: 4985.0
- src_name: Indic languages
- tgt_name: Indic languages
- train_date: 2020-07-27
- src_alpha2: inc
- tgt_alpha2: inc
- prefer_old: False
- long_pair: inc-inc
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["bn", "or", "gu", "mr", "ur", "hi", "as", "si", "inc"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-inc-inc | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"bn",
"or",
"gu",
"mr",
"ur",
"hi",
"as",
"si",
"inc",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"bn",
"or",
"gu",
"mr",
"ur",
"hi",
"as",
"si",
"inc"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #bn #or #gu #mr #ur #hi #as #si #inc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### inc-inc
* source group: Indic languages
* target group: Indic languages
* OPUS readme: inc-inc
* source language(s): asm hin mar urd
* target language(s): asm hin mar urd
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 2.6, chr-F: 0.231
testset: URL, BLEU: 9.1, chr-F: 0.262
testset: URL, BLEU: 28.1, chr-F: 0.548
testset: URL, BLEU: 19.9, chr-F: 0.508
testset: URL, BLEU: 11.6, chr-F: 0.466
testset: URL, BLEU: 17.1, chr-F: 0.464
testset: URL, BLEU: 13.5, chr-F: 0.377
### System Info:
* hf\_name: inc-inc
* source\_languages: inc
* target\_languages: inc
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc']
* src\_constituents: {'pnb', 'gom', 'ben', 'hif\_Latn', 'ori', 'guj', 'pan\_Guru', 'snd\_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san\_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}
* tgt\_constituents: {'pnb', 'gom', 'ben', 'hif\_Latn', 'ori', 'guj', 'pan\_Guru', 'snd\_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san\_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}
* src\_multilingual: True
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: inc
* tgt\_alpha3: inc
* short\_pair: inc-inc
* chrF2\_score: 0.46399999999999997
* bleu: 17.1
* brevity\_penalty: 1.0
* ref\_len: 4985.0
* src\_name: Indic languages
* tgt\_name: Indic languages
* train\_date: 2020-07-27
* src\_alpha2: inc
* tgt\_alpha2: inc
* prefer\_old: False
* long\_pair: inc-inc
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### inc-inc\n\n\n* source group: Indic languages\n* target group: Indic languages\n* OPUS readme: inc-inc\n* model: transformer\n* source language(s): asm hin mar urd\n* target language(s): asm hin mar urd\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 2.6, chr-F: 0.231\ntestset: URL, BLEU: 9.1, chr-F: 0.262\ntestset: URL, BLEU: 28.1, chr-F: 0.548\ntestset: URL, BLEU: 19.9, chr-F: 0.508\ntestset: URL, BLEU: 11.6, chr-F: 0.466\ntestset: URL, BLEU: 17.1, chr-F: 0.464\ntestset: URL, BLEU: 13.5, chr-F: 0.377",
"### System Info:\n\n\n* hf\\_name: inc-inc\n* source\\_languages: inc\n* target\\_languages: inc\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc']\n* src\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san\\_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}\n* tgt\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san\\_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: inc\n* tgt\\_alpha3: inc\n* short\\_pair: inc-inc\n* chrF2\\_score: 0.46399999999999997\n* bleu: 17.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 4985.0\n* src\\_name: Indic languages\n* tgt\\_name: Indic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: inc\n* tgt\\_alpha2: inc\n* prefer\\_old: False\n* long\\_pair: inc-inc\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bn #or #gu #mr #ur #hi #as #si #inc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### inc-inc\n\n\n* source group: Indic languages\n* target group: Indic languages\n* OPUS readme: inc-inc\n* model: transformer\n* source language(s): asm hin mar urd\n* target language(s): asm hin mar urd\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 2.6, chr-F: 0.231\ntestset: URL, BLEU: 9.1, chr-F: 0.262\ntestset: URL, BLEU: 28.1, chr-F: 0.548\ntestset: URL, BLEU: 19.9, chr-F: 0.508\ntestset: URL, BLEU: 11.6, chr-F: 0.466\ntestset: URL, BLEU: 17.1, chr-F: 0.464\ntestset: URL, BLEU: 13.5, chr-F: 0.377",
"### System Info:\n\n\n* hf\\_name: inc-inc\n* source\\_languages: inc\n* target\\_languages: inc\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc']\n* src\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san\\_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}\n* tgt\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san\\_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: inc\n* tgt\\_alpha3: inc\n* short\\_pair: inc-inc\n* chrF2\\_score: 0.46399999999999997\n* bleu: 17.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 4985.0\n* src\\_name: Indic languages\n* tgt\\_name: Indic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: inc\n* tgt\\_alpha2: inc\n* prefer\\_old: False\n* long\\_pair: inc-inc\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
65,
306,
633
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #bn #or #gu #mr #ur #hi #as #si #inc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### inc-inc\n\n\n* source group: Indic languages\n* target group: Indic languages\n* OPUS readme: inc-inc\n* model: transformer\n* source language(s): asm hin mar urd\n* target language(s): asm hin mar urd\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 2.6, chr-F: 0.231\ntestset: URL, BLEU: 9.1, chr-F: 0.262\ntestset: URL, BLEU: 28.1, chr-F: 0.548\ntestset: URL, BLEU: 19.9, chr-F: 0.508\ntestset: URL, BLEU: 11.6, chr-F: 0.466\ntestset: URL, BLEU: 17.1, chr-F: 0.464\ntestset: URL, BLEU: 13.5, chr-F: 0.377### System Info:\n\n\n* hf\\_name: inc-inc\n* source\\_languages: inc\n* target\\_languages: inc\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['bn', 'or', 'gu', 'mr', 'ur', 'hi', 'as', 'si', 'inc']\n* src\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san\\_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}\n* tgt\\_constituents: {'pnb', 'gom', 'ben', 'hif\\_Latn', 'ori', 'guj', 'pan\\_Guru', 'snd\\_Arab', 'npi', 'mar', 'urd', 'bho', 'hin', 'san\\_Deva', 'asm', 'rom', 'mai', 'awa', 'sin'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: inc\n* tgt\\_alpha3: inc\n* short\\_pair: inc-inc\n* chrF2\\_score: 0.46399999999999997\n* bleu: 17.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 4985.0\n* src\\_name: Indic languages\n* tgt\\_name: Indic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: inc\n* tgt\\_alpha2: inc\n* prefer\\_old: False\n* long\\_pair: inc-inc\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### ine-eng
* source group: Indo-European languages
* target group: English
* OPUS readme: [ine-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ine-eng/README.md)
* source language(s): afr aln ang_Latn arg asm ast awa bel bel_Latn ben bho bos_Latn bre bul bul_Latn cat ces cor cos csb_Latn cym dan deu dsb egl ell enm_Latn ext fao fra frm_Latn frr fry gcf_Latn gla gle glg glv gom gos got_Goth grc_Grek gsw guj hat hif_Latn hin hrv hsb hye ind isl ita jdt_Cyrl ksh kur_Arab kur_Latn lad lad_Latn lat_Latn lav lij lit lld_Latn lmo ltg ltz mai mar max_Latn mfe min mkd mwl nds nld nno nob nob_Hebr non_Latn npi oci ori orv_Cyrl oss pan_Guru pap pdc pes pes_Latn pes_Thaa pms pnb pol por prg_Latn pus roh rom ron rue rus san_Deva scn sco sgs sin slv snd_Arab spa sqi srp_Cyrl srp_Latn stq swe swg tgk_Cyrl tly_Latn tmw_Latn ukr urd vec wln yid zlm_Latn zsm_Latn zza
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-hineng.hin.eng | 11.2 | 0.375 |
| newsdev2016-enro-roneng.ron.eng | 35.5 | 0.614 |
| newsdev2017-enlv-laveng.lav.eng | 25.1 | 0.542 |
| newsdev2019-engu-gujeng.guj.eng | 16.0 | 0.420 |
| newsdev2019-enlt-liteng.lit.eng | 24.0 | 0.522 |
| newsdiscussdev2015-enfr-fraeng.fra.eng | 30.1 | 0.550 |
| newsdiscusstest2015-enfr-fraeng.fra.eng | 33.4 | 0.572 |
| newssyscomb2009-ceseng.ces.eng | 24.0 | 0.520 |
| newssyscomb2009-deueng.deu.eng | 25.7 | 0.526 |
| newssyscomb2009-fraeng.fra.eng | 27.9 | 0.550 |
| newssyscomb2009-itaeng.ita.eng | 31.4 | 0.574 |
| newssyscomb2009-spaeng.spa.eng | 28.3 | 0.555 |
| news-test2008-deueng.deu.eng | 24.0 | 0.515 |
| news-test2008-fraeng.fra.eng | 24.5 | 0.524 |
| news-test2008-spaeng.spa.eng | 25.5 | 0.533 |
| newstest2009-ceseng.ces.eng | 23.3 | 0.516 |
| newstest2009-deueng.deu.eng | 23.2 | 0.512 |
| newstest2009-fraeng.fra.eng | 27.3 | 0.545 |
| newstest2009-itaeng.ita.eng | 30.3 | 0.567 |
| newstest2009-spaeng.spa.eng | 27.9 | 0.549 |
| newstest2010-ceseng.ces.eng | 23.8 | 0.523 |
| newstest2010-deueng.deu.eng | 26.2 | 0.545 |
| newstest2010-fraeng.fra.eng | 28.6 | 0.562 |
| newstest2010-spaeng.spa.eng | 31.4 | 0.581 |
| newstest2011-ceseng.ces.eng | 24.2 | 0.521 |
| newstest2011-deueng.deu.eng | 23.9 | 0.522 |
| newstest2011-fraeng.fra.eng | 29.5 | 0.570 |
| newstest2011-spaeng.spa.eng | 30.3 | 0.570 |
| newstest2012-ceseng.ces.eng | 23.5 | 0.516 |
| newstest2012-deueng.deu.eng | 24.9 | 0.529 |
| newstest2012-fraeng.fra.eng | 30.0 | 0.568 |
| newstest2012-ruseng.rus.eng | 29.9 | 0.565 |
| newstest2012-spaeng.spa.eng | 33.3 | 0.593 |
| newstest2013-ceseng.ces.eng | 25.6 | 0.531 |
| newstest2013-deueng.deu.eng | 27.7 | 0.545 |
| newstest2013-fraeng.fra.eng | 30.0 | 0.561 |
| newstest2013-ruseng.rus.eng | 24.4 | 0.514 |
| newstest2013-spaeng.spa.eng | 30.8 | 0.577 |
| newstest2014-csen-ceseng.ces.eng | 27.7 | 0.558 |
| newstest2014-deen-deueng.deu.eng | 27.7 | 0.545 |
| newstest2014-fren-fraeng.fra.eng | 32.2 | 0.592 |
| newstest2014-hien-hineng.hin.eng | 16.7 | 0.450 |
| newstest2014-ruen-ruseng.rus.eng | 27.2 | 0.552 |
| newstest2015-encs-ceseng.ces.eng | 25.4 | 0.518 |
| newstest2015-ende-deueng.deu.eng | 28.8 | 0.552 |
| newstest2015-enru-ruseng.rus.eng | 25.6 | 0.527 |
| newstest2016-encs-ceseng.ces.eng | 27.0 | 0.540 |
| newstest2016-ende-deueng.deu.eng | 33.5 | 0.592 |
| newstest2016-enro-roneng.ron.eng | 32.8 | 0.591 |
| newstest2016-enru-ruseng.rus.eng | 24.8 | 0.523 |
| newstest2017-encs-ceseng.ces.eng | 23.7 | 0.510 |
| newstest2017-ende-deueng.deu.eng | 29.3 | 0.556 |
| newstest2017-enlv-laveng.lav.eng | 18.9 | 0.486 |
| newstest2017-enru-ruseng.rus.eng | 28.0 | 0.546 |
| newstest2018-encs-ceseng.ces.eng | 24.9 | 0.521 |
| newstest2018-ende-deueng.deu.eng | 36.0 | 0.604 |
| newstest2018-enru-ruseng.rus.eng | 23.8 | 0.517 |
| newstest2019-deen-deueng.deu.eng | 31.5 | 0.570 |
| newstest2019-guen-gujeng.guj.eng | 12.1 | 0.377 |
| newstest2019-lten-liteng.lit.eng | 26.6 | 0.555 |
| newstest2019-ruen-ruseng.rus.eng | 27.5 | 0.541 |
| Tatoeba-test.afr-eng.afr.eng | 59.0 | 0.724 |
| Tatoeba-test.ang-eng.ang.eng | 9.9 | 0.254 |
| Tatoeba-test.arg-eng.arg.eng | 41.6 | 0.487 |
| Tatoeba-test.asm-eng.asm.eng | 22.8 | 0.392 |
| Tatoeba-test.ast-eng.ast.eng | 36.1 | 0.521 |
| Tatoeba-test.awa-eng.awa.eng | 11.6 | 0.280 |
| Tatoeba-test.bel-eng.bel.eng | 42.2 | 0.597 |
| Tatoeba-test.ben-eng.ben.eng | 45.8 | 0.598 |
| Tatoeba-test.bho-eng.bho.eng | 34.4 | 0.518 |
| Tatoeba-test.bre-eng.bre.eng | 24.4 | 0.405 |
| Tatoeba-test.bul-eng.bul.eng | 50.8 | 0.660 |
| Tatoeba-test.cat-eng.cat.eng | 51.2 | 0.677 |
| Tatoeba-test.ces-eng.ces.eng | 47.6 | 0.641 |
| Tatoeba-test.cor-eng.cor.eng | 5.4 | 0.214 |
| Tatoeba-test.cos-eng.cos.eng | 61.0 | 0.675 |
| Tatoeba-test.csb-eng.csb.eng | 22.5 | 0.394 |
| Tatoeba-test.cym-eng.cym.eng | 34.7 | 0.522 |
| Tatoeba-test.dan-eng.dan.eng | 56.2 | 0.708 |
| Tatoeba-test.deu-eng.deu.eng | 44.9 | 0.625 |
| Tatoeba-test.dsb-eng.dsb.eng | 21.0 | 0.383 |
| Tatoeba-test.egl-eng.egl.eng | 6.9 | 0.221 |
| Tatoeba-test.ell-eng.ell.eng | 62.1 | 0.741 |
| Tatoeba-test.enm-eng.enm.eng | 22.6 | 0.466 |
| Tatoeba-test.ext-eng.ext.eng | 33.2 | 0.496 |
| Tatoeba-test.fao-eng.fao.eng | 28.1 | 0.460 |
| Tatoeba-test.fas-eng.fas.eng | 9.6 | 0.306 |
| Tatoeba-test.fra-eng.fra.eng | 50.3 | 0.661 |
| Tatoeba-test.frm-eng.frm.eng | 30.0 | 0.457 |
| Tatoeba-test.frr-eng.frr.eng | 15.2 | 0.301 |
| Tatoeba-test.fry-eng.fry.eng | 34.4 | 0.525 |
| Tatoeba-test.gcf-eng.gcf.eng | 18.4 | 0.317 |
| Tatoeba-test.gla-eng.gla.eng | 24.1 | 0.400 |
| Tatoeba-test.gle-eng.gle.eng | 52.2 | 0.671 |
| Tatoeba-test.glg-eng.glg.eng | 50.5 | 0.669 |
| Tatoeba-test.glv-eng.glv.eng | 5.7 | 0.189 |
| Tatoeba-test.gos-eng.gos.eng | 19.2 | 0.378 |
| Tatoeba-test.got-eng.got.eng | 0.1 | 0.022 |
| Tatoeba-test.grc-eng.grc.eng | 0.9 | 0.095 |
| Tatoeba-test.gsw-eng.gsw.eng | 23.9 | 0.390 |
| Tatoeba-test.guj-eng.guj.eng | 28.0 | 0.428 |
| Tatoeba-test.hat-eng.hat.eng | 44.2 | 0.567 |
| Tatoeba-test.hbs-eng.hbs.eng | 51.6 | 0.666 |
| Tatoeba-test.hif-eng.hif.eng | 22.3 | 0.451 |
| Tatoeba-test.hin-eng.hin.eng | 41.7 | 0.585 |
| Tatoeba-test.hsb-eng.hsb.eng | 46.4 | 0.590 |
| Tatoeba-test.hye-eng.hye.eng | 40.4 | 0.564 |
| Tatoeba-test.isl-eng.isl.eng | 43.8 | 0.605 |
| Tatoeba-test.ita-eng.ita.eng | 60.7 | 0.735 |
| Tatoeba-test.jdt-eng.jdt.eng | 5.5 | 0.091 |
| Tatoeba-test.kok-eng.kok.eng | 7.8 | 0.205 |
| Tatoeba-test.ksh-eng.ksh.eng | 15.8 | 0.284 |
| Tatoeba-test.kur-eng.kur.eng | 11.6 | 0.232 |
| Tatoeba-test.lad-eng.lad.eng | 30.7 | 0.484 |
| Tatoeba-test.lah-eng.lah.eng | 11.0 | 0.286 |
| Tatoeba-test.lat-eng.lat.eng | 24.4 | 0.432 |
| Tatoeba-test.lav-eng.lav.eng | 47.2 | 0.646 |
| Tatoeba-test.lij-eng.lij.eng | 9.0 | 0.287 |
| Tatoeba-test.lit-eng.lit.eng | 51.7 | 0.670 |
| Tatoeba-test.lld-eng.lld.eng | 22.4 | 0.369 |
| Tatoeba-test.lmo-eng.lmo.eng | 26.1 | 0.381 |
| Tatoeba-test.ltz-eng.ltz.eng | 39.8 | 0.536 |
| Tatoeba-test.mai-eng.mai.eng | 72.3 | 0.758 |
| Tatoeba-test.mar-eng.mar.eng | 32.0 | 0.554 |
| Tatoeba-test.mfe-eng.mfe.eng | 63.1 | 0.822 |
| Tatoeba-test.mkd-eng.mkd.eng | 49.5 | 0.638 |
| Tatoeba-test.msa-eng.msa.eng | 38.6 | 0.566 |
| Tatoeba-test.multi.eng | 45.6 | 0.615 |
| Tatoeba-test.mwl-eng.mwl.eng | 40.4 | 0.767 |
| Tatoeba-test.nds-eng.nds.eng | 35.5 | 0.538 |
| Tatoeba-test.nep-eng.nep.eng | 4.9 | 0.209 |
| Tatoeba-test.nld-eng.nld.eng | 54.2 | 0.694 |
| Tatoeba-test.non-eng.non.eng | 39.3 | 0.573 |
| Tatoeba-test.nor-eng.nor.eng | 50.9 | 0.663 |
| Tatoeba-test.oci-eng.oci.eng | 19.6 | 0.386 |
| Tatoeba-test.ori-eng.ori.eng | 16.2 | 0.364 |
| Tatoeba-test.orv-eng.orv.eng | 13.6 | 0.288 |
| Tatoeba-test.oss-eng.oss.eng | 9.4 | 0.301 |
| Tatoeba-test.pan-eng.pan.eng | 17.1 | 0.389 |
| Tatoeba-test.pap-eng.pap.eng | 57.0 | 0.680 |
| Tatoeba-test.pdc-eng.pdc.eng | 41.6 | 0.526 |
| Tatoeba-test.pms-eng.pms.eng | 13.7 | 0.333 |
| Tatoeba-test.pol-eng.pol.eng | 46.5 | 0.632 |
| Tatoeba-test.por-eng.por.eng | 56.4 | 0.710 |
| Tatoeba-test.prg-eng.prg.eng | 2.3 | 0.193 |
| Tatoeba-test.pus-eng.pus.eng | 3.2 | 0.194 |
| Tatoeba-test.roh-eng.roh.eng | 17.5 | 0.420 |
| Tatoeba-test.rom-eng.rom.eng | 5.0 | 0.237 |
| Tatoeba-test.ron-eng.ron.eng | 51.4 | 0.670 |
| Tatoeba-test.rue-eng.rue.eng | 26.0 | 0.447 |
| Tatoeba-test.rus-eng.rus.eng | 47.8 | 0.634 |
| Tatoeba-test.san-eng.san.eng | 4.0 | 0.195 |
| Tatoeba-test.scn-eng.scn.eng | 45.1 | 0.440 |
| Tatoeba-test.sco-eng.sco.eng | 41.9 | 0.582 |
| Tatoeba-test.sgs-eng.sgs.eng | 38.7 | 0.498 |
| Tatoeba-test.sin-eng.sin.eng | 29.7 | 0.499 |
| Tatoeba-test.slv-eng.slv.eng | 38.2 | 0.564 |
| Tatoeba-test.snd-eng.snd.eng | 12.7 | 0.342 |
| Tatoeba-test.spa-eng.spa.eng | 53.2 | 0.687 |
| Tatoeba-test.sqi-eng.sqi.eng | 51.9 | 0.679 |
| Tatoeba-test.stq-eng.stq.eng | 9.0 | 0.391 |
| Tatoeba-test.swe-eng.swe.eng | 57.4 | 0.705 |
| Tatoeba-test.swg-eng.swg.eng | 18.0 | 0.338 |
| Tatoeba-test.tgk-eng.tgk.eng | 24.3 | 0.413 |
| Tatoeba-test.tly-eng.tly.eng | 1.1 | 0.094 |
| Tatoeba-test.ukr-eng.ukr.eng | 48.0 | 0.639 |
| Tatoeba-test.urd-eng.urd.eng | 27.2 | 0.471 |
| Tatoeba-test.vec-eng.vec.eng | 28.0 | 0.398 |
| Tatoeba-test.wln-eng.wln.eng | 17.5 | 0.320 |
| Tatoeba-test.yid-eng.yid.eng | 26.9 | 0.457 |
| Tatoeba-test.zza-eng.zza.eng | 1.7 | 0.131 |
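The BLEU and chr-F columns above follow the standard OPUS-MT evaluation. A hedged sketch of how comparable scores can be computed with the `sacrebleu` package; the hypothesis and reference lists here are illustrative stand-ins for the released test-set files:

```python
from sacrebleu.metrics import BLEU, CHRF

# illustrative stand-ins for detokenized system output and reference translations
hypotheses = ["The cat sat on the mat."]
references = [["The cat was sitting on the mat."]]  # one reference stream, aligned with hypotheses

bleu = BLEU()
chrf = CHRF()
print(bleu.corpus_score(hypotheses, references))  # corpus-level BLEU
print(chrf.corpus_score(hypotheses, references))  # corpus-level chrF2
```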
### System Info:
- hf_name: ine-eng
- source_languages: ine
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ine-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'en', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']
- src_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos_Latn', 'lad_Latn', 'lat_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm_Latn', 'srd', 'gcf_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur_Latn', 'arg', 'pes_Thaa', 'sqi', 'csb_Latn', 'fra', 'hat', 'non_Latn', 'sco', 'pnb', 'roh', 'bul_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw_Latn', 'hsb', 'tly_Latn', 'bul', 'bel', 'got_Goth', 'lat_Grek', 'ext', 'gla', 'mai', 'sin', 'hif_Latn', 'eng', 'bre', 'nob_Hebr', 'prg_Latn', 'ang_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr_Arab', 'san_Deva', 'gos', 'rus', 'fao', 'orv_Cyrl', 'bel_Latn', 'cos', 'zza', 'grc_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk_Cyrl', 'hye_Latn', 'pdc', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp_Latn', 'zlm_Latn', 'ind', 'rom', 'hye', 'scn', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus_Latn', 'jdt_Cyrl', 'gsw', 'glv', 'nld', 'snd_Arab', 'kur_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm_Latn', 'ksh', 'pan_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld_Latn', 'ces', 'egl', 'vec', 'max_Latn', 'pes_Latn', 'ltg', 'nds'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ine-eng/opus2m-2020-08-01.test.txt
- src_alpha3: ine
- tgt_alpha3: eng
- short_pair: ine-en
- chrF2_score: 0.615
- bleu: 45.6
- brevity_penalty: 0.997
- ref_len: 71872.0
- src_name: Indo-European languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: ine
- tgt_alpha2: en
- prefer_old: False
- long_pair: ine-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ca", "es", "os", "ro", "fy", "cy", "sc", "is", "yi", "lb", "an", "sq", "fr", "ht", "rm", "ps", "af", "uk", "sl", "lt", "bg", "be", "gd", "si", "en", "br", "mk", "or", "mr", "ru", "fo", "co", "oc", "pl", "gl", "nb", "bn", "id", "hy", "da", "gv", "nl", "pt", "hi", "as", "kw", "ga", "sv", "gu", "wa", "lv", "el", "it", "hr", "ur", "nn", "de", "cs", "ine"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ine-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ca",
"es",
"os",
"ro",
"fy",
"cy",
"sc",
"is",
"yi",
"lb",
"an",
"sq",
"fr",
"ht",
"rm",
"ps",
"af",
"uk",
"sl",
"lt",
"bg",
"be",
"gd",
"si",
"en",
"br",
"mk",
"or",
"mr",
"ru",
"fo",
"co",
"oc",
"pl",
"gl",
"nb",
"bn",
"id",
"hy",
"da",
"gv",
"nl",
"pt",
"hi",
"as",
"kw",
"ga",
"sv",
"gu",
"wa",
"lv",
"el",
"it",
"hr",
"ur",
"nn",
"de",
"cs",
"ine",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ca",
"es",
"os",
"ro",
"fy",
"cy",
"sc",
"is",
"yi",
"lb",
"an",
"sq",
"fr",
"ht",
"rm",
"ps",
"af",
"uk",
"sl",
"lt",
"bg",
"be",
"gd",
"si",
"en",
"br",
"mk",
"or",
"mr",
"ru",
"fo",
"co",
"oc",
"pl",
"gl",
"nb",
"bn",
"id",
"hy",
"da",
"gv",
"nl",
"pt",
"hi",
"as",
"kw",
"ga",
"sv",
"gu",
"wa",
"lv",
"el",
"it",
"hr",
"ur",
"nn",
"de",
"cs",
"ine"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ca #es #os #ro #fy #cy #sc #is #yi #lb #an #sq #fr #ht #rm #ps #af #uk #sl #lt #bg #be #gd #si #en #br #mk #or #mr #ru #fo #co #oc #pl #gl #nb #bn #id #hy #da #gv #nl #pt #hi #as #kw #ga #sv #gu #wa #lv #el #it #hr #ur #nn #de #cs #ine #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ine-eng
* source group: Indo-European languages
* target group: English
* OPUS readme: ine-eng
* source language(s): afr aln ang\_Latn arg asm ast awa bel bel\_Latn ben bho bos\_Latn bre bul bul\_Latn cat ces cor cos csb\_Latn cym dan deu dsb egl ell enm\_Latn ext fao fra frm\_Latn frr fry gcf\_Latn gla gle glg glv gom gos got\_Goth grc\_Grek gsw guj hat hif\_Latn hin hrv hsb hye ind isl ita jdt\_Cyrl ksh kur\_Arab kur\_Latn lad lad\_Latn lat\_Latn lav lij lit lld\_Latn lmo ltg ltz mai mar max\_Latn mfe min mkd mwl nds nld nno nob nob\_Hebr non\_Latn npi oci ori orv\_Cyrl oss pan\_Guru pap pdc pes pes\_Latn pes\_Thaa pms pnb pol por prg\_Latn pus roh rom ron rue rus san\_Deva scn sco sgs sin slv snd\_Arab spa sqi srp\_Cyrl srp\_Latn stq swe swg tgk\_Cyrl tly\_Latn tmw\_Latn ukr urd vec wln yid zlm\_Latn zsm\_Latn zza
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 11.2, chr-F: 0.375
testset: URL, BLEU: 35.5, chr-F: 0.614
testset: URL, BLEU: 25.1, chr-F: 0.542
testset: URL, BLEU: 16.0, chr-F: 0.420
testset: URL, BLEU: 24.0, chr-F: 0.522
testset: URL, BLEU: 30.1, chr-F: 0.550
testset: URL, BLEU: 33.4, chr-F: 0.572
testset: URL, BLEU: 24.0, chr-F: 0.520
testset: URL, BLEU: 25.7, chr-F: 0.526
testset: URL, BLEU: 27.9, chr-F: 0.550
testset: URL, BLEU: 31.4, chr-F: 0.574
testset: URL, BLEU: 28.3, chr-F: 0.555
testset: URL, BLEU: 24.0, chr-F: 0.515
testset: URL, BLEU: 24.5, chr-F: 0.524
testset: URL, BLEU: 25.5, chr-F: 0.533
testset: URL, BLEU: 23.3, chr-F: 0.516
testset: URL, BLEU: 23.2, chr-F: 0.512
testset: URL, BLEU: 27.3, chr-F: 0.545
testset: URL, BLEU: 30.3, chr-F: 0.567
testset: URL, BLEU: 27.9, chr-F: 0.549
testset: URL, BLEU: 23.8, chr-F: 0.523
testset: URL, BLEU: 26.2, chr-F: 0.545
testset: URL, BLEU: 28.6, chr-F: 0.562
testset: URL, BLEU: 31.4, chr-F: 0.581
testset: URL, BLEU: 24.2, chr-F: 0.521
testset: URL, BLEU: 23.9, chr-F: 0.522
testset: URL, BLEU: 29.5, chr-F: 0.570
testset: URL, BLEU: 30.3, chr-F: 0.570
testset: URL, BLEU: 23.5, chr-F: 0.516
testset: URL, BLEU: 24.9, chr-F: 0.529
testset: URL, BLEU: 30.0, chr-F: 0.568
testset: URL, BLEU: 29.9, chr-F: 0.565
testset: URL, BLEU: 33.3, chr-F: 0.593
testset: URL, BLEU: 25.6, chr-F: 0.531
testset: URL, BLEU: 27.7, chr-F: 0.545
testset: URL, BLEU: 30.0, chr-F: 0.561
testset: URL, BLEU: 24.4, chr-F: 0.514
testset: URL, BLEU: 30.8, chr-F: 0.577
testset: URL, BLEU: 27.7, chr-F: 0.558
testset: URL, BLEU: 27.7, chr-F: 0.545
testset: URL, BLEU: 32.2, chr-F: 0.592
testset: URL, BLEU: 16.7, chr-F: 0.450
testset: URL, BLEU: 27.2, chr-F: 0.552
testset: URL, BLEU: 25.4, chr-F: 0.518
testset: URL, BLEU: 28.8, chr-F: 0.552
testset: URL, BLEU: 25.6, chr-F: 0.527
testset: URL, BLEU: 27.0, chr-F: 0.540
testset: URL, BLEU: 33.5, chr-F: 0.592
testset: URL, BLEU: 32.8, chr-F: 0.591
testset: URL, BLEU: 24.8, chr-F: 0.523
testset: URL, BLEU: 23.7, chr-F: 0.510
testset: URL, BLEU: 29.3, chr-F: 0.556
testset: URL, BLEU: 18.9, chr-F: 0.486
testset: URL, BLEU: 28.0, chr-F: 0.546
testset: URL, BLEU: 24.9, chr-F: 0.521
testset: URL, BLEU: 36.0, chr-F: 0.604
testset: URL, BLEU: 23.8, chr-F: 0.517
testset: URL, BLEU: 31.5, chr-F: 0.570
testset: URL, BLEU: 12.1, chr-F: 0.377
testset: URL, BLEU: 26.6, chr-F: 0.555
testset: URL, BLEU: 27.5, chr-F: 0.541
testset: URL, BLEU: 59.0, chr-F: 0.724
testset: URL, BLEU: 9.9, chr-F: 0.254
testset: URL, BLEU: 41.6, chr-F: 0.487
testset: URL, BLEU: 22.8, chr-F: 0.392
testset: URL, BLEU: 36.1, chr-F: 0.521
testset: URL, BLEU: 11.6, chr-F: 0.280
testset: URL, BLEU: 42.2, chr-F: 0.597
testset: URL, BLEU: 45.8, chr-F: 0.598
testset: URL, BLEU: 34.4, chr-F: 0.518
testset: URL, BLEU: 24.4, chr-F: 0.405
testset: URL, BLEU: 50.8, chr-F: 0.660
testset: URL, BLEU: 51.2, chr-F: 0.677
testset: URL, BLEU: 47.6, chr-F: 0.641
testset: URL, BLEU: 5.4, chr-F: 0.214
testset: URL, BLEU: 61.0, chr-F: 0.675
testset: URL, BLEU: 22.5, chr-F: 0.394
testset: URL, BLEU: 34.7, chr-F: 0.522
testset: URL, BLEU: 56.2, chr-F: 0.708
testset: URL, BLEU: 44.9, chr-F: 0.625
testset: URL, BLEU: 21.0, chr-F: 0.383
testset: URL, BLEU: 6.9, chr-F: 0.221
testset: URL, BLEU: 62.1, chr-F: 0.741
testset: URL, BLEU: 22.6, chr-F: 0.466
testset: URL, BLEU: 33.2, chr-F: 0.496
testset: URL, BLEU: 28.1, chr-F: 0.460
testset: URL, BLEU: 9.6, chr-F: 0.306
testset: URL, BLEU: 50.3, chr-F: 0.661
testset: URL, BLEU: 30.0, chr-F: 0.457
testset: URL, BLEU: 15.2, chr-F: 0.301
testset: URL, BLEU: 34.4, chr-F: 0.525
testset: URL, BLEU: 18.4, chr-F: 0.317
testset: URL, BLEU: 24.1, chr-F: 0.400
testset: URL, BLEU: 52.2, chr-F: 0.671
testset: URL, BLEU: 50.5, chr-F: 0.669
testset: URL, BLEU: 5.7, chr-F: 0.189
testset: URL, BLEU: 19.2, chr-F: 0.378
testset: URL, BLEU: 0.1, chr-F: 0.022
testset: URL, BLEU: 0.9, chr-F: 0.095
testset: URL, BLEU: 23.9, chr-F: 0.390
testset: URL, BLEU: 28.0, chr-F: 0.428
testset: URL, BLEU: 44.2, chr-F: 0.567
testset: URL, BLEU: 51.6, chr-F: 0.666
testset: URL, BLEU: 22.3, chr-F: 0.451
testset: URL, BLEU: 41.7, chr-F: 0.585
testset: URL, BLEU: 46.4, chr-F: 0.590
testset: URL, BLEU: 40.4, chr-F: 0.564
testset: URL, BLEU: 43.8, chr-F: 0.605
testset: URL, BLEU: 60.7, chr-F: 0.735
testset: URL, BLEU: 5.5, chr-F: 0.091
testset: URL, BLEU: 7.8, chr-F: 0.205
testset: URL, BLEU: 15.8, chr-F: 0.284
testset: URL, BLEU: 11.6, chr-F: 0.232
testset: URL, BLEU: 30.7, chr-F: 0.484
testset: URL, BLEU: 11.0, chr-F: 0.286
testset: URL, BLEU: 24.4, chr-F: 0.432
testset: URL, BLEU: 47.2, chr-F: 0.646
testset: URL, BLEU: 9.0, chr-F: 0.287
testset: URL, BLEU: 51.7, chr-F: 0.670
testset: URL, BLEU: 22.4, chr-F: 0.369
testset: URL, BLEU: 26.1, chr-F: 0.381
testset: URL, BLEU: 39.8, chr-F: 0.536
testset: URL, BLEU: 72.3, chr-F: 0.758
testset: URL, BLEU: 32.0, chr-F: 0.554
testset: URL, BLEU: 63.1, chr-F: 0.822
testset: URL, BLEU: 49.5, chr-F: 0.638
testset: URL, BLEU: 38.6, chr-F: 0.566
testset: URL, BLEU: 45.6, chr-F: 0.615
testset: URL, BLEU: 40.4, chr-F: 0.767
testset: URL, BLEU: 35.5, chr-F: 0.538
testset: URL, BLEU: 4.9, chr-F: 0.209
testset: URL, BLEU: 54.2, chr-F: 0.694
testset: URL, BLEU: 39.3, chr-F: 0.573
testset: URL, BLEU: 50.9, chr-F: 0.663
testset: URL, BLEU: 19.6, chr-F: 0.386
testset: URL, BLEU: 16.2, chr-F: 0.364
testset: URL, BLEU: 13.6, chr-F: 0.288
testset: URL, BLEU: 9.4, chr-F: 0.301
testset: URL, BLEU: 17.1, chr-F: 0.389
testset: URL, BLEU: 57.0, chr-F: 0.680
testset: URL, BLEU: 41.6, chr-F: 0.526
testset: URL, BLEU: 13.7, chr-F: 0.333
testset: URL, BLEU: 46.5, chr-F: 0.632
testset: URL, BLEU: 56.4, chr-F: 0.710
testset: URL, BLEU: 2.3, chr-F: 0.193
testset: URL, BLEU: 3.2, chr-F: 0.194
testset: URL, BLEU: 17.5, chr-F: 0.420
testset: URL, BLEU: 5.0, chr-F: 0.237
testset: URL, BLEU: 51.4, chr-F: 0.670
testset: URL, BLEU: 26.0, chr-F: 0.447
testset: URL, BLEU: 47.8, chr-F: 0.634
testset: URL, BLEU: 4.0, chr-F: 0.195
testset: URL, BLEU: 45.1, chr-F: 0.440
testset: URL, BLEU: 41.9, chr-F: 0.582
testset: URL, BLEU: 38.7, chr-F: 0.498
testset: URL, BLEU: 29.7, chr-F: 0.499
testset: URL, BLEU: 38.2, chr-F: 0.564
testset: URL, BLEU: 12.7, chr-F: 0.342
testset: URL, BLEU: 53.2, chr-F: 0.687
testset: URL, BLEU: 51.9, chr-F: 0.679
testset: URL, BLEU: 9.0, chr-F: 0.391
testset: URL, BLEU: 57.4, chr-F: 0.705
testset: URL, BLEU: 18.0, chr-F: 0.338
testset: URL, BLEU: 24.3, chr-F: 0.413
testset: URL, BLEU: 1.1, chr-F: 0.094
testset: URL, BLEU: 48.0, chr-F: 0.639
testset: URL, BLEU: 27.2, chr-F: 0.471
testset: URL, BLEU: 28.0, chr-F: 0.398
testset: URL, BLEU: 17.5, chr-F: 0.320
testset: URL, BLEU: 26.9, chr-F: 0.457
testset: URL, BLEU: 1.7, chr-F: 0.131
### System Info:
* hf\_name: ine-eng
* source\_languages: ine
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'en', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']
* src\_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos\_Latn', 'lad\_Latn', 'lat\_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm\_Latn', 'srd', 'gcf\_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur\_Latn', 'arg', 'pes\_Thaa', 'sqi', 'csb\_Latn', 'fra', 'hat', 'non\_Latn', 'sco', 'pnb', 'roh', 'bul\_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw\_Latn', 'hsb', 'tly\_Latn', 'bul', 'bel', 'got\_Goth', 'lat\_Grek', 'ext', 'gla', 'mai', 'sin', 'hif\_Latn', 'eng', 'bre', 'nob\_Hebr', 'prg\_Latn', 'ang\_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr\_Arab', 'san\_Deva', 'gos', 'rus', 'fao', 'orv\_Cyrl', 'bel\_Latn', 'cos', 'zza', 'grc\_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk\_Cyrl', 'hye\_Latn', 'pdc', 'srp\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp\_Latn', 'zlm\_Latn', 'ind', 'rom', 'hye', 'scn', 'enm\_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus\_Latn', 'jdt\_Cyrl', 'gsw', 'glv', 'nld', 'snd\_Arab', 'kur\_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm\_Latn', 'ksh', 'pan\_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld\_Latn', 'ces', 'egl', 'vec', 'max\_Latn', 'pes\_Latn', 'ltg', 'nds'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ine
* tgt\_alpha3: eng
* short\_pair: ine-en
* chrF2\_score: 0.615
* bleu: 45.6
* brevity\_penalty: 0.997
* ref\_len: 71872.0
* src\_name: Indo-European languages
* tgt\_name: English
* train\_date: 2020-08-01
* src\_alpha2: ine
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: ine-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ine-eng\n\n\n* source group: Indo-European languages\n* target group: English\n* OPUS readme: ine-eng\n* model: transformer\n* source language(s): afr aln ang\\_Latn arg asm ast awa bel bel\\_Latn ben bho bos\\_Latn bre bul bul\\_Latn cat ces cor cos csb\\_Latn cym dan deu dsb egl ell enm\\_Latn ext fao fra frm\\_Latn frr fry gcf\\_Latn gla gle glg glv gom gos got\\_Goth grc\\_Grek gsw guj hat hif\\_Latn hin hrv hsb hye ind isl ita jdt\\_Cyrl ksh kur\\_Arab kur\\_Latn lad lad\\_Latn lat\\_Latn lav lij lit lld\\_Latn lmo ltg ltz mai mar max\\_Latn mfe min mkd mwl nds nld nno nob nob\\_Hebr non\\_Latn npi oci ori orv\\_Cyrl oss pan\\_Guru pap pdc pes pes\\_Latn pes\\_Thaa pms pnb pol por prg\\_Latn pus roh rom ron rue rus san\\_Deva scn sco sgs sin slv snd\\_Arab spa sqi srp\\_Cyrl srp\\_Latn stq swe swg tgk\\_Cyrl tly\\_Latn tmw\\_Latn ukr urd vec wln yid zlm\\_Latn zsm\\_Latn zza\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.2, chr-F: 0.375\ntestset: URL, BLEU: 35.5, chr-F: 0.614\ntestset: URL, BLEU: 25.1, chr-F: 0.542\ntestset: URL, BLEU: 16.0, chr-F: 0.420\ntestset: URL, BLEU: 24.0, chr-F: 0.522\ntestset: URL, BLEU: 30.1, chr-F: 0.550\ntestset: URL, BLEU: 33.4, chr-F: 0.572\ntestset: URL, BLEU: 24.0, chr-F: 0.520\ntestset: URL, BLEU: 25.7, chr-F: 0.526\ntestset: URL, BLEU: 27.9, chr-F: 0.550\ntestset: URL, BLEU: 31.4, chr-F: 0.574\ntestset: URL, BLEU: 28.3, chr-F: 0.555\ntestset: URL, BLEU: 24.0, chr-F: 0.515\ntestset: URL, BLEU: 24.5, chr-F: 0.524\ntestset: URL, BLEU: 25.5, chr-F: 0.533\ntestset: URL, BLEU: 23.3, chr-F: 0.516\ntestset: URL, BLEU: 23.2, chr-F: 0.512\ntestset: URL, BLEU: 27.3, chr-F: 0.545\ntestset: URL, BLEU: 30.3, chr-F: 0.567\ntestset: URL, BLEU: 27.9, chr-F: 0.549\ntestset: URL, BLEU: 23.8, chr-F: 0.523\ntestset: URL, BLEU: 26.2, chr-F: 0.545\ntestset: URL, BLEU: 28.6, chr-F: 0.562\ntestset: URL, BLEU: 31.4, chr-F: 0.581\ntestset: URL, BLEU: 24.2, chr-F: 0.521\ntestset: URL, BLEU: 23.9, chr-F: 0.522\ntestset: URL, BLEU: 29.5, chr-F: 0.570\ntestset: URL, BLEU: 30.3, chr-F: 0.570\ntestset: URL, BLEU: 23.5, chr-F: 0.516\ntestset: URL, BLEU: 24.9, chr-F: 0.529\ntestset: URL, BLEU: 30.0, chr-F: 0.568\ntestset: URL, BLEU: 29.9, chr-F: 0.565\ntestset: URL, BLEU: 33.3, chr-F: 0.593\ntestset: URL, BLEU: 25.6, chr-F: 0.531\ntestset: URL, BLEU: 27.7, chr-F: 0.545\ntestset: URL, BLEU: 30.0, chr-F: 0.561\ntestset: URL, BLEU: 24.4, chr-F: 0.514\ntestset: URL, BLEU: 30.8, chr-F: 0.577\ntestset: URL, BLEU: 27.7, chr-F: 0.558\ntestset: URL, BLEU: 27.7, chr-F: 0.545\ntestset: URL, BLEU: 32.2, chr-F: 0.592\ntestset: URL, BLEU: 16.7, chr-F: 0.450\ntestset: URL, BLEU: 27.2, chr-F: 0.552\ntestset: URL, BLEU: 25.4, chr-F: 0.518\ntestset: URL, BLEU: 28.8, chr-F: 0.552\ntestset: URL, BLEU: 25.6, chr-F: 0.527\ntestset: URL, BLEU: 27.0, chr-F: 0.540\ntestset: URL, BLEU: 33.5, chr-F: 0.592\ntestset: URL, BLEU: 32.8, chr-F: 0.591\ntestset: URL, BLEU: 24.8, chr-F: 0.523\ntestset: URL, BLEU: 23.7, chr-F: 0.510\ntestset: URL, BLEU: 29.3, chr-F: 0.556\ntestset: URL, BLEU: 18.9, chr-F: 0.486\ntestset: URL, BLEU: 28.0, chr-F: 0.546\ntestset: URL, BLEU: 24.9, chr-F: 0.521\ntestset: URL, BLEU: 36.0, chr-F: 0.604\ntestset: URL, BLEU: 23.8, chr-F: 0.517\ntestset: URL, BLEU: 31.5, chr-F: 0.570\ntestset: URL, BLEU: 12.1, chr-F: 0.377\ntestset: URL, BLEU: 26.6, chr-F: 0.555\ntestset: URL, BLEU: 
27.5, chr-F: 0.541\ntestset: URL, BLEU: 59.0, chr-F: 0.724\ntestset: URL, BLEU: 9.9, chr-F: 0.254\ntestset: URL, BLEU: 41.6, chr-F: 0.487\ntestset: URL, BLEU: 22.8, chr-F: 0.392\ntestset: URL, BLEU: 36.1, chr-F: 0.521\ntestset: URL, BLEU: 11.6, chr-F: 0.280\ntestset: URL, BLEU: 42.2, chr-F: 0.597\ntestset: URL, BLEU: 45.8, chr-F: 0.598\ntestset: URL, BLEU: 34.4, chr-F: 0.518\ntestset: URL, BLEU: 24.4, chr-F: 0.405\ntestset: URL, BLEU: 50.8, chr-F: 0.660\ntestset: URL, BLEU: 51.2, chr-F: 0.677\ntestset: URL, BLEU: 47.6, chr-F: 0.641\ntestset: URL, BLEU: 5.4, chr-F: 0.214\ntestset: URL, BLEU: 61.0, chr-F: 0.675\ntestset: URL, BLEU: 22.5, chr-F: 0.394\ntestset: URL, BLEU: 34.7, chr-F: 0.522\ntestset: URL, BLEU: 56.2, chr-F: 0.708\ntestset: URL, BLEU: 44.9, chr-F: 0.625\ntestset: URL, BLEU: 21.0, chr-F: 0.383\ntestset: URL, BLEU: 6.9, chr-F: 0.221\ntestset: URL, BLEU: 62.1, chr-F: 0.741\ntestset: URL, BLEU: 22.6, chr-F: 0.466\ntestset: URL, BLEU: 33.2, chr-F: 0.496\ntestset: URL, BLEU: 28.1, chr-F: 0.460\ntestset: URL, BLEU: 9.6, chr-F: 0.306\ntestset: URL, BLEU: 50.3, chr-F: 0.661\ntestset: URL, BLEU: 30.0, chr-F: 0.457\ntestset: URL, BLEU: 15.2, chr-F: 0.301\ntestset: URL, BLEU: 34.4, chr-F: 0.525\ntestset: URL, BLEU: 18.4, chr-F: 0.317\ntestset: URL, BLEU: 24.1, chr-F: 0.400\ntestset: URL, BLEU: 52.2, chr-F: 0.671\ntestset: URL, BLEU: 50.5, chr-F: 0.669\ntestset: URL, BLEU: 5.7, chr-F: 0.189\ntestset: URL, BLEU: 19.2, chr-F: 0.378\ntestset: URL, BLEU: 0.1, chr-F: 0.022\ntestset: URL, BLEU: 0.9, chr-F: 0.095\ntestset: URL, BLEU: 23.9, chr-F: 0.390\ntestset: URL, BLEU: 28.0, chr-F: 0.428\ntestset: URL, BLEU: 44.2, chr-F: 0.567\ntestset: URL, BLEU: 51.6, chr-F: 0.666\ntestset: URL, BLEU: 22.3, chr-F: 0.451\ntestset: URL, BLEU: 41.7, chr-F: 0.585\ntestset: URL, BLEU: 46.4, chr-F: 0.590\ntestset: URL, BLEU: 40.4, chr-F: 0.564\ntestset: URL, BLEU: 43.8, chr-F: 0.605\ntestset: URL, BLEU: 60.7, chr-F: 0.735\ntestset: URL, BLEU: 5.5, chr-F: 0.091\ntestset: URL, BLEU: 7.8, chr-F: 0.205\ntestset: URL, BLEU: 15.8, chr-F: 0.284\ntestset: URL, BLEU: 11.6, chr-F: 0.232\ntestset: URL, BLEU: 30.7, chr-F: 0.484\ntestset: URL, BLEU: 11.0, chr-F: 0.286\ntestset: URL, BLEU: 24.4, chr-F: 0.432\ntestset: URL, BLEU: 47.2, chr-F: 0.646\ntestset: URL, BLEU: 9.0, chr-F: 0.287\ntestset: URL, BLEU: 51.7, chr-F: 0.670\ntestset: URL, BLEU: 22.4, chr-F: 0.369\ntestset: URL, BLEU: 26.1, chr-F: 0.381\ntestset: URL, BLEU: 39.8, chr-F: 0.536\ntestset: URL, BLEU: 72.3, chr-F: 0.758\ntestset: URL, BLEU: 32.0, chr-F: 0.554\ntestset: URL, BLEU: 63.1, chr-F: 0.822\ntestset: URL, BLEU: 49.5, chr-F: 0.638\ntestset: URL, BLEU: 38.6, chr-F: 0.566\ntestset: URL, BLEU: 45.6, chr-F: 0.615\ntestset: URL, BLEU: 40.4, chr-F: 0.767\ntestset: URL, BLEU: 35.5, chr-F: 0.538\ntestset: URL, BLEU: 4.9, chr-F: 0.209\ntestset: URL, BLEU: 54.2, chr-F: 0.694\ntestset: URL, BLEU: 39.3, chr-F: 0.573\ntestset: URL, BLEU: 50.9, chr-F: 0.663\ntestset: URL, BLEU: 19.6, chr-F: 0.386\ntestset: URL, BLEU: 16.2, chr-F: 0.364\ntestset: URL, BLEU: 13.6, chr-F: 0.288\ntestset: URL, BLEU: 9.4, chr-F: 0.301\ntestset: URL, BLEU: 17.1, chr-F: 0.389\ntestset: URL, BLEU: 57.0, chr-F: 0.680\ntestset: URL, BLEU: 41.6, chr-F: 0.526\ntestset: URL, BLEU: 13.7, chr-F: 0.333\ntestset: URL, BLEU: 46.5, chr-F: 0.632\ntestset: URL, BLEU: 56.4, chr-F: 0.710\ntestset: URL, BLEU: 2.3, chr-F: 0.193\ntestset: URL, BLEU: 3.2, chr-F: 0.194\ntestset: URL, BLEU: 17.5, chr-F: 0.420\ntestset: URL, BLEU: 5.0, chr-F: 0.237\ntestset: URL, BLEU: 51.4, chr-F: 0.670\ntestset: URL, BLEU: 26.0, 
chr-F: 0.447\ntestset: URL, BLEU: 47.8, chr-F: 0.634\ntestset: URL, BLEU: 4.0, chr-F: 0.195\ntestset: URL, BLEU: 45.1, chr-F: 0.440\ntestset: URL, BLEU: 41.9, chr-F: 0.582\ntestset: URL, BLEU: 38.7, chr-F: 0.498\ntestset: URL, BLEU: 29.7, chr-F: 0.499\ntestset: URL, BLEU: 38.2, chr-F: 0.564\ntestset: URL, BLEU: 12.7, chr-F: 0.342\ntestset: URL, BLEU: 53.2, chr-F: 0.687\ntestset: URL, BLEU: 51.9, chr-F: 0.679\ntestset: URL, BLEU: 9.0, chr-F: 0.391\ntestset: URL, BLEU: 57.4, chr-F: 0.705\ntestset: URL, BLEU: 18.0, chr-F: 0.338\ntestset: URL, BLEU: 24.3, chr-F: 0.413\ntestset: URL, BLEU: 1.1, chr-F: 0.094\ntestset: URL, BLEU: 48.0, chr-F: 0.639\ntestset: URL, BLEU: 27.2, chr-F: 0.471\ntestset: URL, BLEU: 28.0, chr-F: 0.398\ntestset: URL, BLEU: 17.5, chr-F: 0.320\ntestset: URL, BLEU: 26.9, chr-F: 0.457\ntestset: URL, BLEU: 1.7, chr-F: 0.131",
"### System Info:\n\n\n* hf\\_name: ine-eng\n* source\\_languages: ine\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'en', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']\n* src\\_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos\\_Latn', 'lad\\_Latn', 'lat\\_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur\\_Latn', 'arg', 'pes\\_Thaa', 'sqi', 'csb\\_Latn', 'fra', 'hat', 'non\\_Latn', 'sco', 'pnb', 'roh', 'bul\\_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw\\_Latn', 'hsb', 'tly\\_Latn', 'bul', 'bel', 'got\\_Goth', 'lat\\_Grek', 'ext', 'gla', 'mai', 'sin', 'hif\\_Latn', 'eng', 'bre', 'nob\\_Hebr', 'prg\\_Latn', 'ang\\_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr\\_Arab', 'san\\_Deva', 'gos', 'rus', 'fao', 'orv\\_Cyrl', 'bel\\_Latn', 'cos', 'zza', 'grc\\_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk\\_Cyrl', 'hye\\_Latn', 'pdc', 'srp\\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp\\_Latn', 'zlm\\_Latn', 'ind', 'rom', 'hye', 'scn', 'enm\\_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus\\_Latn', 'jdt\\_Cyrl', 'gsw', 'glv', 'nld', 'snd\\_Arab', 'kur\\_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm\\_Latn', 'ksh', 'pan\\_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld\\_Latn', 'ces', 'egl', 'vec', 'max\\_Latn', 'pes\\_Latn', 'ltg', 'nds'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ine\n* tgt\\_alpha3: eng\n* short\\_pair: ine-en\n* chrF2\\_score: 0.615\n* bleu: 45.6\n* brevity\\_penalty: 0.997\n* ref\\_len: 71872.0\n* src\\_name: Indo-European languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: ine\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: ine-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ca #es #os #ro #fy #cy #sc #is #yi #lb #an #sq #fr #ht #rm #ps #af #uk #sl #lt #bg #be #gd #si #en #br #mk #or #mr #ru #fo #co #oc #pl #gl #nb #bn #id #hy #da #gv #nl #pt #hi #as #kw #ga #sv #gu #wa #lv #el #it #hr #ur #nn #de #cs #ine #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ine-eng\n\n\n* source group: Indo-European languages\n* target group: English\n* OPUS readme: ine-eng\n* model: transformer\n* source language(s): afr aln ang\\_Latn arg asm ast awa bel bel\\_Latn ben bho bos\\_Latn bre bul bul\\_Latn cat ces cor cos csb\\_Latn cym dan deu dsb egl ell enm\\_Latn ext fao fra frm\\_Latn frr fry gcf\\_Latn gla gle glg glv gom gos got\\_Goth grc\\_Grek gsw guj hat hif\\_Latn hin hrv hsb hye ind isl ita jdt\\_Cyrl ksh kur\\_Arab kur\\_Latn lad lad\\_Latn lat\\_Latn lav lij lit lld\\_Latn lmo ltg ltz mai mar max\\_Latn mfe min mkd mwl nds nld nno nob nob\\_Hebr non\\_Latn npi oci ori orv\\_Cyrl oss pan\\_Guru pap pdc pes pes\\_Latn pes\\_Thaa pms pnb pol por prg\\_Latn pus roh rom ron rue rus san\\_Deva scn sco sgs sin slv snd\\_Arab spa sqi srp\\_Cyrl srp\\_Latn stq swe swg tgk\\_Cyrl tly\\_Latn tmw\\_Latn ukr urd vec wln yid zlm\\_Latn zsm\\_Latn zza\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.2, chr-F: 0.375\ntestset: URL, BLEU: 35.5, chr-F: 0.614\ntestset: URL, BLEU: 25.1, chr-F: 0.542\ntestset: URL, BLEU: 16.0, chr-F: 0.420\ntestset: URL, BLEU: 24.0, chr-F: 0.522\ntestset: URL, BLEU: 30.1, chr-F: 0.550\ntestset: URL, BLEU: 33.4, chr-F: 0.572\ntestset: URL, BLEU: 24.0, chr-F: 0.520\ntestset: URL, BLEU: 25.7, chr-F: 0.526\ntestset: URL, BLEU: 27.9, chr-F: 0.550\ntestset: URL, BLEU: 31.4, chr-F: 0.574\ntestset: URL, BLEU: 28.3, chr-F: 0.555\ntestset: URL, BLEU: 24.0, chr-F: 0.515\ntestset: URL, BLEU: 24.5, chr-F: 0.524\ntestset: URL, BLEU: 25.5, chr-F: 0.533\ntestset: URL, BLEU: 23.3, chr-F: 0.516\ntestset: URL, BLEU: 23.2, chr-F: 0.512\ntestset: URL, BLEU: 27.3, chr-F: 0.545\ntestset: URL, BLEU: 30.3, chr-F: 0.567\ntestset: URL, BLEU: 27.9, chr-F: 0.549\ntestset: URL, BLEU: 23.8, chr-F: 0.523\ntestset: URL, BLEU: 26.2, chr-F: 0.545\ntestset: URL, BLEU: 28.6, chr-F: 0.562\ntestset: URL, BLEU: 31.4, chr-F: 0.581\ntestset: URL, BLEU: 24.2, chr-F: 0.521\ntestset: URL, BLEU: 23.9, chr-F: 0.522\ntestset: URL, BLEU: 29.5, chr-F: 0.570\ntestset: URL, BLEU: 30.3, chr-F: 0.570\ntestset: URL, BLEU: 23.5, chr-F: 0.516\ntestset: URL, BLEU: 24.9, chr-F: 0.529\ntestset: URL, BLEU: 30.0, chr-F: 0.568\ntestset: URL, BLEU: 29.9, chr-F: 0.565\ntestset: URL, BLEU: 33.3, chr-F: 0.593\ntestset: URL, BLEU: 25.6, chr-F: 0.531\ntestset: URL, BLEU: 27.7, chr-F: 0.545\ntestset: URL, BLEU: 30.0, chr-F: 0.561\ntestset: URL, BLEU: 24.4, chr-F: 0.514\ntestset: URL, BLEU: 30.8, chr-F: 0.577\ntestset: URL, BLEU: 27.7, chr-F: 0.558\ntestset: URL, BLEU: 27.7, chr-F: 0.545\ntestset: URL, BLEU: 32.2, chr-F: 0.592\ntestset: URL, BLEU: 16.7, chr-F: 0.450\ntestset: URL, BLEU: 27.2, chr-F: 0.552\ntestset: URL, BLEU: 25.4, chr-F: 0.518\ntestset: URL, BLEU: 28.8, chr-F: 0.552\ntestset: URL, BLEU: 25.6, chr-F: 0.527\ntestset: URL, BLEU: 27.0, chr-F: 0.540\ntestset: URL, BLEU: 33.5, chr-F: 0.592\ntestset: URL, BLEU: 32.8, chr-F: 0.591\ntestset: URL, BLEU: 24.8, chr-F: 0.523\ntestset: URL, BLEU: 23.7, chr-F: 0.510\ntestset: URL, BLEU: 29.3, chr-F: 0.556\ntestset: URL, BLEU: 18.9, chr-F: 0.486\ntestset: URL, BLEU: 28.0, chr-F: 0.546\ntestset: URL, BLEU: 24.9, chr-F: 0.521\ntestset: URL, BLEU: 36.0, chr-F: 0.604\ntestset: URL, BLEU: 23.8, chr-F: 0.517\ntestset: URL, BLEU: 31.5, chr-F: 0.570\ntestset: URL, BLEU: 12.1, chr-F: 0.377\ntestset: URL, BLEU: 26.6, chr-F: 0.555\ntestset: URL, BLEU: 
27.5, chr-F: 0.541\ntestset: URL, BLEU: 59.0, chr-F: 0.724\ntestset: URL, BLEU: 9.9, chr-F: 0.254\ntestset: URL, BLEU: 41.6, chr-F: 0.487\ntestset: URL, BLEU: 22.8, chr-F: 0.392\ntestset: URL, BLEU: 36.1, chr-F: 0.521\ntestset: URL, BLEU: 11.6, chr-F: 0.280\ntestset: URL, BLEU: 42.2, chr-F: 0.597\ntestset: URL, BLEU: 45.8, chr-F: 0.598\ntestset: URL, BLEU: 34.4, chr-F: 0.518\ntestset: URL, BLEU: 24.4, chr-F: 0.405\ntestset: URL, BLEU: 50.8, chr-F: 0.660\ntestset: URL, BLEU: 51.2, chr-F: 0.677\ntestset: URL, BLEU: 47.6, chr-F: 0.641\ntestset: URL, BLEU: 5.4, chr-F: 0.214\ntestset: URL, BLEU: 61.0, chr-F: 0.675\ntestset: URL, BLEU: 22.5, chr-F: 0.394\ntestset: URL, BLEU: 34.7, chr-F: 0.522\ntestset: URL, BLEU: 56.2, chr-F: 0.708\ntestset: URL, BLEU: 44.9, chr-F: 0.625\ntestset: URL, BLEU: 21.0, chr-F: 0.383\ntestset: URL, BLEU: 6.9, chr-F: 0.221\ntestset: URL, BLEU: 62.1, chr-F: 0.741\ntestset: URL, BLEU: 22.6, chr-F: 0.466\ntestset: URL, BLEU: 33.2, chr-F: 0.496\ntestset: URL, BLEU: 28.1, chr-F: 0.460\ntestset: URL, BLEU: 9.6, chr-F: 0.306\ntestset: URL, BLEU: 50.3, chr-F: 0.661\ntestset: URL, BLEU: 30.0, chr-F: 0.457\ntestset: URL, BLEU: 15.2, chr-F: 0.301\ntestset: URL, BLEU: 34.4, chr-F: 0.525\ntestset: URL, BLEU: 18.4, chr-F: 0.317\ntestset: URL, BLEU: 24.1, chr-F: 0.400\ntestset: URL, BLEU: 52.2, chr-F: 0.671\ntestset: URL, BLEU: 50.5, chr-F: 0.669\ntestset: URL, BLEU: 5.7, chr-F: 0.189\ntestset: URL, BLEU: 19.2, chr-F: 0.378\ntestset: URL, BLEU: 0.1, chr-F: 0.022\ntestset: URL, BLEU: 0.9, chr-F: 0.095\ntestset: URL, BLEU: 23.9, chr-F: 0.390\ntestset: URL, BLEU: 28.0, chr-F: 0.428\ntestset: URL, BLEU: 44.2, chr-F: 0.567\ntestset: URL, BLEU: 51.6, chr-F: 0.666\ntestset: URL, BLEU: 22.3, chr-F: 0.451\ntestset: URL, BLEU: 41.7, chr-F: 0.585\ntestset: URL, BLEU: 46.4, chr-F: 0.590\ntestset: URL, BLEU: 40.4, chr-F: 0.564\ntestset: URL, BLEU: 43.8, chr-F: 0.605\ntestset: URL, BLEU: 60.7, chr-F: 0.735\ntestset: URL, BLEU: 5.5, chr-F: 0.091\ntestset: URL, BLEU: 7.8, chr-F: 0.205\ntestset: URL, BLEU: 15.8, chr-F: 0.284\ntestset: URL, BLEU: 11.6, chr-F: 0.232\ntestset: URL, BLEU: 30.7, chr-F: 0.484\ntestset: URL, BLEU: 11.0, chr-F: 0.286\ntestset: URL, BLEU: 24.4, chr-F: 0.432\ntestset: URL, BLEU: 47.2, chr-F: 0.646\ntestset: URL, BLEU: 9.0, chr-F: 0.287\ntestset: URL, BLEU: 51.7, chr-F: 0.670\ntestset: URL, BLEU: 22.4, chr-F: 0.369\ntestset: URL, BLEU: 26.1, chr-F: 0.381\ntestset: URL, BLEU: 39.8, chr-F: 0.536\ntestset: URL, BLEU: 72.3, chr-F: 0.758\ntestset: URL, BLEU: 32.0, chr-F: 0.554\ntestset: URL, BLEU: 63.1, chr-F: 0.822\ntestset: URL, BLEU: 49.5, chr-F: 0.638\ntestset: URL, BLEU: 38.6, chr-F: 0.566\ntestset: URL, BLEU: 45.6, chr-F: 0.615\ntestset: URL, BLEU: 40.4, chr-F: 0.767\ntestset: URL, BLEU: 35.5, chr-F: 0.538\ntestset: URL, BLEU: 4.9, chr-F: 0.209\ntestset: URL, BLEU: 54.2, chr-F: 0.694\ntestset: URL, BLEU: 39.3, chr-F: 0.573\ntestset: URL, BLEU: 50.9, chr-F: 0.663\ntestset: URL, BLEU: 19.6, chr-F: 0.386\ntestset: URL, BLEU: 16.2, chr-F: 0.364\ntestset: URL, BLEU: 13.6, chr-F: 0.288\ntestset: URL, BLEU: 9.4, chr-F: 0.301\ntestset: URL, BLEU: 17.1, chr-F: 0.389\ntestset: URL, BLEU: 57.0, chr-F: 0.680\ntestset: URL, BLEU: 41.6, chr-F: 0.526\ntestset: URL, BLEU: 13.7, chr-F: 0.333\ntestset: URL, BLEU: 46.5, chr-F: 0.632\ntestset: URL, BLEU: 56.4, chr-F: 0.710\ntestset: URL, BLEU: 2.3, chr-F: 0.193\ntestset: URL, BLEU: 3.2, chr-F: 0.194\ntestset: URL, BLEU: 17.5, chr-F: 0.420\ntestset: URL, BLEU: 5.0, chr-F: 0.237\ntestset: URL, BLEU: 51.4, chr-F: 0.670\ntestset: URL, BLEU: 26.0, 
chr-F: 0.447\ntestset: URL, BLEU: 47.8, chr-F: 0.634\ntestset: URL, BLEU: 4.0, chr-F: 0.195\ntestset: URL, BLEU: 45.1, chr-F: 0.440\ntestset: URL, BLEU: 41.9, chr-F: 0.582\ntestset: URL, BLEU: 38.7, chr-F: 0.498\ntestset: URL, BLEU: 29.7, chr-F: 0.499\ntestset: URL, BLEU: 38.2, chr-F: 0.564\ntestset: URL, BLEU: 12.7, chr-F: 0.342\ntestset: URL, BLEU: 53.2, chr-F: 0.687\ntestset: URL, BLEU: 51.9, chr-F: 0.679\ntestset: URL, BLEU: 9.0, chr-F: 0.391\ntestset: URL, BLEU: 57.4, chr-F: 0.705\ntestset: URL, BLEU: 18.0, chr-F: 0.338\ntestset: URL, BLEU: 24.3, chr-F: 0.413\ntestset: URL, BLEU: 1.1, chr-F: 0.094\ntestset: URL, BLEU: 48.0, chr-F: 0.639\ntestset: URL, BLEU: 27.2, chr-F: 0.471\ntestset: URL, BLEU: 28.0, chr-F: 0.398\ntestset: URL, BLEU: 17.5, chr-F: 0.320\ntestset: URL, BLEU: 26.9, chr-F: 0.457\ntestset: URL, BLEU: 1.7, chr-F: 0.131",
"### System Info:\n\n\n* hf\\_name: ine-eng\n* source\\_languages: ine\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'en', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']\n* src\\_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos\\_Latn', 'lad\\_Latn', 'lat\\_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur\\_Latn', 'arg', 'pes\\_Thaa', 'sqi', 'csb\\_Latn', 'fra', 'hat', 'non\\_Latn', 'sco', 'pnb', 'roh', 'bul\\_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw\\_Latn', 'hsb', 'tly\\_Latn', 'bul', 'bel', 'got\\_Goth', 'lat\\_Grek', 'ext', 'gla', 'mai', 'sin', 'hif\\_Latn', 'eng', 'bre', 'nob\\_Hebr', 'prg\\_Latn', 'ang\\_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr\\_Arab', 'san\\_Deva', 'gos', 'rus', 'fao', 'orv\\_Cyrl', 'bel\\_Latn', 'cos', 'zza', 'grc\\_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk\\_Cyrl', 'hye\\_Latn', 'pdc', 'srp\\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp\\_Latn', 'zlm\\_Latn', 'ind', 'rom', 'hye', 'scn', 'enm\\_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus\\_Latn', 'jdt\\_Cyrl', 'gsw', 'glv', 'nld', 'snd\\_Arab', 'kur\\_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm\\_Latn', 'ksh', 'pan\\_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld\\_Latn', 'ces', 'egl', 'vec', 'max\\_Latn', 'pes\\_Latn', 'ltg', 'nds'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ine\n* tgt\\_alpha3: eng\n* short\\_pair: ine-en\n* chrF2\\_score: 0.615\n* bleu: 45.6\n* brevity\\_penalty: 0.997\n* ref\\_len: 71872.0\n* src\\_name: Indo-European languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: ine\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: ine-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
178,
4368,
1459
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ca #es #os #ro #fy #cy #sc #is #yi #lb #an #sq #fr #ht #rm #ps #af #uk #sl #lt #bg #be #gd #si #en #br #mk #or #mr #ru #fo #co #oc #pl #gl #nb #bn #id #hy #da #gv #nl #pt #hi #as #kw #ga #sv #gu #wa #lv #el #it #hr #ur #nn #de #cs #ine #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ine-eng\n\n\n* source group: Indo-European languages\n* target group: English\n* OPUS readme: ine-eng\n* model: transformer\n* source language(s): afr aln ang\\_Latn arg asm ast awa bel bel\\_Latn ben bho bos\\_Latn bre bul bul\\_Latn cat ces cor cos csb\\_Latn cym dan deu dsb egl ell enm\\_Latn ext fao fra frm\\_Latn frr fry gcf\\_Latn gla gle glg glv gom gos got\\_Goth grc\\_Grek gsw guj hat hif\\_Latn hin hrv hsb hye ind isl ita jdt\\_Cyrl ksh kur\\_Arab kur\\_Latn lad lad\\_Latn lat\\_Latn lav lij lit lld\\_Latn lmo ltg ltz mai mar max\\_Latn mfe min mkd mwl nds nld nno nob nob\\_Hebr non\\_Latn npi oci ori orv\\_Cyrl oss pan\\_Guru pap pdc pes pes\\_Latn pes\\_Thaa pms pnb pol por prg\\_Latn pus roh rom ron rue rus san\\_Deva scn sco sgs sin slv snd\\_Arab spa sqi srp\\_Cyrl srp\\_Latn stq swe swg tgk\\_Cyrl tly\\_Latn tmw\\_Latn ukr urd vec wln yid zlm\\_Latn zsm\\_Latn zza\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.2, chr-F: 0.375\ntestset: URL, BLEU: 35.5, chr-F: 0.614\ntestset: URL, BLEU: 25.1, chr-F: 0.542\ntestset: URL, BLEU: 16.0, chr-F: 0.420\ntestset: URL, BLEU: 24.0, chr-F: 0.522\ntestset: URL, BLEU: 30.1, chr-F: 0.550\ntestset: URL, BLEU: 33.4, chr-F: 0.572\ntestset: URL, BLEU: 24.0, chr-F: 0.520\ntestset: URL, BLEU: 25.7, chr-F: 0.526\ntestset: URL, BLEU: 27.9, chr-F: 0.550\ntestset: URL, BLEU: 31.4, chr-F: 0.574\ntestset: URL, BLEU: 28.3, chr-F: 0.555\ntestset: URL, BLEU: 24.0, chr-F: 0.515\ntestset: URL, BLEU: 24.5, chr-F: 0.524\ntestset: URL, BLEU: 25.5, chr-F: 0.533\ntestset: URL, BLEU: 23.3, chr-F: 0.516\ntestset: URL, BLEU: 23.2, chr-F: 0.512\ntestset: URL, BLEU: 27.3, chr-F: 0.545\ntestset: URL, BLEU: 30.3, chr-F: 0.567\ntestset: URL, BLEU: 27.9, chr-F: 0.549\ntestset: URL, BLEU: 23.8, chr-F: 0.523\ntestset: URL, BLEU: 26.2, chr-F: 0.545\ntestset: URL, BLEU: 28.6, chr-F: 0.562\ntestset: URL, BLEU: 31.4, chr-F: 0.581\ntestset: URL, BLEU: 24.2, chr-F: 0.521\ntestset: URL, BLEU: 23.9, chr-F: 0.522\ntestset: URL, BLEU: 29.5, chr-F: 0.570\ntestset: URL, BLEU: 30.3, chr-F: 0.570\ntestset: URL, BLEU: 23.5, chr-F: 0.516\ntestset: URL, BLEU: 24.9, chr-F: 0.529\ntestset: URL, BLEU: 30.0, chr-F: 0.568\ntestset: URL, BLEU: 29.9, chr-F: 0.565\ntestset: URL, BLEU: 33.3, chr-F: 0.593\ntestset: URL, BLEU: 25.6, chr-F: 0.531\ntestset: URL, BLEU: 27.7, chr-F: 0.545\ntestset: URL, BLEU: 30.0, chr-F: 0.561\ntestset: URL, BLEU: 24.4, chr-F: 0.514\ntestset: URL, BLEU: 30.8, chr-F: 0.577\ntestset: URL, BLEU: 27.7, chr-F: 0.558\ntestset: URL, BLEU: 27.7, chr-F: 0.545\ntestset: URL, BLEU: 32.2, chr-F: 0.592\ntestset: URL, BLEU: 16.7, chr-F: 0.450\ntestset: URL, BLEU: 27.2, chr-F: 0.552\ntestset: URL, BLEU: 25.4, chr-F: 0.518\ntestset: URL, BLEU: 28.8, chr-F: 0.552\ntestset: URL, BLEU: 25.6, chr-F: 0.527\ntestset: URL, BLEU: 27.0, chr-F: 0.540\ntestset: URL, BLEU: 33.5, chr-F: 0.592\ntestset: URL, BLEU: 32.8, chr-F: 0.591\ntestset: URL, BLEU: 24.8, chr-F: 0.523\ntestset: URL, BLEU: 
23.7, chr-F: 0.510\ntestset: URL, BLEU: 29.3, chr-F: 0.556\ntestset: URL, BLEU: 18.9, chr-F: 0.486\ntestset: URL, BLEU: 28.0, chr-F: 0.546\ntestset: URL, BLEU: 24.9, chr-F: 0.521\ntestset: URL, BLEU: 36.0, chr-F: 0.604\ntestset: URL, BLEU: 23.8, chr-F: 0.517\ntestset: URL, BLEU: 31.5, chr-F: 0.570\ntestset: URL, BLEU: 12.1, chr-F: 0.377\ntestset: URL, BLEU: 26.6, chr-F: 0.555\ntestset: URL, BLEU: 27.5, chr-F: 0.541\ntestset: URL, BLEU: 59.0, chr-F: 0.724\ntestset: URL, BLEU: 9.9, chr-F: 0.254\ntestset: URL, BLEU: 41.6, chr-F: 0.487\ntestset: URL, BLEU: 22.8, chr-F: 0.392\ntestset: URL, BLEU: 36.1, chr-F: 0.521\ntestset: URL, BLEU: 11.6, chr-F: 0.280\ntestset: URL, BLEU: 42.2, chr-F: 0.597\ntestset: URL, BLEU: 45.8, chr-F: 0.598\ntestset: URL, BLEU: 34.4, chr-F: 0.518\ntestset: URL, BLEU: 24.4, chr-F: 0.405\ntestset: URL, BLEU: 50.8, chr-F: 0.660\ntestset: URL, BLEU: 51.2, chr-F: 0.677\ntestset: URL, BLEU: 47.6, chr-F: 0.641\ntestset: URL, BLEU: 5.4, chr-F: 0.214\ntestset: URL, BLEU: 61.0, chr-F: 0.675\ntestset: URL, BLEU: 22.5, chr-F: 0.394\ntestset: URL, BLEU: 34.7, chr-F: 0.522\ntestset: URL, BLEU: 56.2, chr-F: 0.708\ntestset: URL, BLEU: 44.9, chr-F: 0.625\ntestset: URL, BLEU: 21.0, chr-F: 0.383\ntestset: URL, BLEU: 6.9, chr-F: 0.221\ntestset: URL, BLEU: 62.1, chr-F: 0.741\ntestset: URL, BLEU: 22.6, chr-F: 0.466\ntestset: URL, BLEU: 33.2, chr-F: 0.496\ntestset: URL, BLEU: 28.1, chr-F: 0.460\ntestset: URL, BLEU: 9.6, chr-F: 0.306\ntestset: URL, BLEU: 50.3, chr-F: 0.661\ntestset: URL, BLEU: 30.0, chr-F: 0.457\ntestset: URL, BLEU: 15.2, chr-F: 0.301\ntestset: URL, BLEU: 34.4, chr-F: 0.525\ntestset: URL, BLEU: 18.4, chr-F: 0.317\ntestset: URL, BLEU: 24.1, chr-F: 0.400\ntestset: URL, BLEU: 52.2, chr-F: 0.671\ntestset: URL, BLEU: 50.5, chr-F: 0.669\ntestset: URL, BLEU: 5.7, chr-F: 0.189\ntestset: URL, BLEU: 19.2, chr-F: 0.378\ntestset: URL, BLEU: 0.1, chr-F: 0.022\ntestset: URL, BLEU: 0.9, chr-F: 0.095\ntestset: URL, BLEU: 23.9, chr-F: 0.390\ntestset: URL, BLEU: 28.0, chr-F: 0.428\ntestset: URL, BLEU: 44.2, chr-F: 0.567\ntestset: URL, BLEU: 51.6, chr-F: 0.666\ntestset: URL, BLEU: 22.3, chr-F: 0.451\ntestset: URL, BLEU: 41.7, chr-F: 0.585\ntestset: URL, BLEU: 46.4, chr-F: 0.590\ntestset: URL, BLEU: 40.4, chr-F: 0.564\ntestset: URL, BLEU: 43.8, chr-F: 0.605\ntestset: URL, BLEU: 60.7, chr-F: 0.735\ntestset: URL, BLEU: 5.5, chr-F: 0.091\ntestset: URL, BLEU: 7.8, chr-F: 0.205\ntestset: URL, BLEU: 15.8, chr-F: 0.284\ntestset: URL, BLEU: 11.6, chr-F: 0.232\ntestset: URL, BLEU: 30.7, chr-F: 0.484\ntestset: URL, BLEU: 11.0, chr-F: 0.286\ntestset: URL, BLEU: 24.4, chr-F: 0.432\ntestset: URL, BLEU: 47.2, chr-F: 0.646\ntestset: URL, BLEU: 9.0, chr-F: 0.287\ntestset: URL, BLEU: 51.7, chr-F: 0.670\ntestset: URL, BLEU: 22.4, chr-F: 0.369\ntestset: URL, BLEU: 26.1, chr-F: 0.381\ntestset: URL, BLEU: 39.8, chr-F: 0.536\ntestset: URL, BLEU: 72.3, chr-F: 0.758\ntestset: URL, BLEU: 32.0, chr-F: 0.554\ntestset: URL, BLEU: 63.1, chr-F: 0.822\ntestset: URL, BLEU: 49.5, chr-F: 0.638\ntestset: URL, BLEU: 38.6, chr-F: 0.566\ntestset: URL, BLEU: 45.6, chr-F: 0.615\ntestset: URL, BLEU: 40.4, chr-F: 0.767\ntestset: URL, BLEU: 35.5, chr-F: 0.538\ntestset: URL, BLEU: 4.9, chr-F: 0.209\ntestset: URL, BLEU: 54.2, chr-F: 0.694\ntestset: URL, BLEU: 39.3, chr-F: 0.573\ntestset: URL, BLEU: 50.9, chr-F: 0.663\ntestset: URL, BLEU: 19.6, chr-F: 0.386\ntestset: URL, BLEU: 16.2, chr-F: 0.364\ntestset: URL, BLEU: 13.6, chr-F: 0.288\ntestset: URL, BLEU: 9.4, chr-F: 0.301\ntestset: URL, BLEU: 17.1, chr-F: 0.389\ntestset: URL, BLEU: 57.0, 
chr-F: 0.680\ntestset: URL, BLEU: 41.6, chr-F: 0.526\ntestset: URL, BLEU: 13.7, chr-F: 0.333\ntestset: URL, BLEU: 46.5, chr-F: 0.632\ntestset: URL, BLEU: 56.4, chr-F: 0.710\ntestset: URL, BLEU: 2.3, chr-F: 0.193\ntestset: URL, BLEU: 3.2, chr-F: 0.194\ntestset: URL, BLEU: 17.5, chr-F: 0.420\ntestset: URL, BLEU: 5.0, chr-F: 0.237\ntestset: URL, BLEU: 51.4, chr-F: 0.670\ntestset: URL, BLEU: 26.0, chr-F: 0.447\ntestset: URL, BLEU: 47.8, chr-F: 0.634\ntestset: URL, BLEU: 4.0, chr-F: 0.195\ntestset: URL, BLEU: 45.1, chr-F: 0.440\ntestset: URL, BLEU: 41.9, chr-F: 0.582\ntestset: URL, BLEU: 38.7, chr-F: 0.498\ntestset: URL, BLEU: 29.7, chr-F: 0.499\ntestset: URL, BLEU: 38.2, chr-F: 0.564\ntestset: URL, BLEU: 12.7, chr-F: 0.342\ntestset: URL, BLEU: 53.2, chr-F: 0.687\ntestset: URL, BLEU: 51.9, chr-F: 0.679\ntestset: URL, BLEU: 9.0, chr-F: 0.391\ntestset: URL, BLEU: 57.4, chr-F: 0.705\ntestset: URL, BLEU: 18.0, chr-F: 0.338\ntestset: URL, BLEU: 24.3, chr-F: 0.413\ntestset: URL, BLEU: 1.1, chr-F: 0.094\ntestset: URL, BLEU: 48.0, chr-F: 0.639\ntestset: URL, BLEU: 27.2, chr-F: 0.471\ntestset: URL, BLEU: 28.0, chr-F: 0.398\ntestset: URL, BLEU: 17.5, chr-F: 0.320\ntestset: URL, BLEU: 26.9, chr-F: 0.457\ntestset: URL, BLEU: 1.7, chr-F: 0.131### System Info:\n\n\n* hf\\_name: ine-eng\n* source\\_languages: ine\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'en', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']\n* src\\_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos\\_Latn', 'lad\\_Latn', 'lat\\_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur\\_Latn', 'arg', 'pes\\_Thaa', 'sqi', 'csb\\_Latn', 'fra', 'hat', 'non\\_Latn', 'sco', 'pnb', 'roh', 'bul\\_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw\\_Latn', 'hsb', 'tly\\_Latn', 'bul', 'bel', 'got\\_Goth', 'lat\\_Grek', 'ext', 'gla', 'mai', 'sin', 'hif\\_Latn', 'eng', 'bre', 'nob\\_Hebr', 'prg\\_Latn', 'ang\\_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr\\_Arab', 'san\\_Deva', 'gos', 'rus', 'fao', 'orv\\_Cyrl', 'bel\\_Latn', 'cos', 'zza', 'grc\\_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk\\_Cyrl', 'hye\\_Latn', 'pdc', 'srp\\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp\\_Latn', 'zlm\\_Latn', 'ind', 'rom', 'hye', 'scn', 'enm\\_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus\\_Latn', 'jdt\\_Cyrl', 'gsw', 'glv', 'nld', 'snd\\_Arab', 'kur\\_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm\\_Latn', 'ksh', 'pan\\_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld\\_Latn', 'ces', 'egl', 'vec', 'max\\_Latn', 'pes\\_Latn', 'ltg', 'nds'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ine\n* tgt\\_alpha3: eng\n* short\\_pair: ine-en\n* chrF2\\_score: 0.615\n* bleu: 45.6\n* brevity\\_penalty: 0.997\n* ref\\_len: 71872.0\n* src\\_name: Indo-European languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: ine\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: 
ine-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### ine-ine
* source group: Indo-European languages
* target group: Indo-European languages
* OPUS readme: [ine-ine](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ine-ine/README.md)
* model: transformer
* source language(s): afr afr_Arab aln ang_Latn arg asm ast awa bel bel_Latn ben bho bjn bos_Latn bre bul bul_Latn cat ces cor cos csb_Latn cym dan deu dsb egl ell eng enm_Latn ext fao fra frm_Latn frr fry gcf_Latn gla gle glg glv gom gos got_Goth grc_Grek gsw guj hat hif_Latn hin hrv hsb hye hye_Latn ind isl ita jdt_Cyrl ksh kur_Arab kur_Latn lad lad_Latn lat_Grek lat_Latn lav lij lit lld_Latn lmo ltg ltz mai mar max_Latn mfe min mkd mwl nds nld nno nob nob_Hebr non_Latn npi oci ori orv_Cyrl oss pan_Guru pap pcd pdc pes pes_Latn pes_Thaa pms pnb pol por prg_Latn pus roh rom ron rue rus rus_Latn san_Deva scn sco sgs sin slv snd_Arab spa sqi srd srp_Cyrl srp_Latn stq swe swg tgk_Cyrl tly_Latn tmw_Latn ukr urd vec wln yid zlm_Latn zsm_Latn zza
* target language(s): afr afr_Arab aln ang_Latn arg asm ast awa bel bel_Latn ben bho bjn bos_Latn bre bul bul_Latn cat ces cor cos csb_Latn cym dan deu dsb egl ell eng enm_Latn ext fao fra frm_Latn frr fry gcf_Latn gla gle glg glv gom gos got_Goth grc_Grek gsw guj hat hif_Latn hin hrv hsb hye hye_Latn ind isl ita jdt_Cyrl ksh kur_Arab kur_Latn lad lad_Latn lat_Grek lat_Latn lav lij lit lld_Latn lmo ltg ltz mai mar max_Latn mfe min mkd mwl nds nld nno nob nob_Hebr non_Latn npi oci ori orv_Cyrl oss pan_Guru pap pcd pdc pes pes_Latn pes_Thaa pms pnb pol por prg_Latn pus roh rom ron rue rus rus_Latn san_Deva scn sco sgs sin slv snd_Arab spa sqi srd srp_Cyrl srp_Latn stq swe swg tgk_Cyrl tly_Latn tmw_Latn ukr urd vec wln yid zlm_Latn zsm_Latn zza
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form `>>id<<` (where `id` is a valid target language ID); see the usage sketch below the download links
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-ine/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-ine/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ine-ine/opus-2020-07-27.eval.txt)
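The following is a minimal usage sketch with the Hugging Face `transformers` library, showing the required sentence-initial `>>id<<` token. The checkpoint name `Helsinki-NLP/opus-mt-ine-ine` is an assumption based on the usual OPUS-MT naming convention, and the example sentence and target language are illustrative only.

```python
# Minimal sketch (assumption: the checkpoint is published as
# "Helsinki-NLP/opus-mt-ine-ine"): many-to-many translation within
# the Indo-European languages.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ine-ine"  # assumed checkpoint name
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Select the target language with a sentence-initial >>id<< token;
# here >>deu<< requests German output.
src_texts = [">>deu<< This model covers many Indo-European languages."]

batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```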
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| euelections_dev2019.de-fr-deufra.deu.fra | 19.2 | 0.482 |
| euelections_dev2019.fr-de-fradeu.fra.deu | 15.8 | 0.470 |
| newsdev2014-enghin.eng.hin | 4.0 | 0.245 |
| newsdev2014-hineng.hin.eng | 6.8 | 0.301 |
| newsdev2016-enro-engron.eng.ron | 17.3 | 0.470 |
| newsdev2016-enro-roneng.ron.eng | 26.0 | 0.534 |
| newsdev2017-enlv-englav.eng.lav | 12.1 | 0.416 |
| newsdev2017-enlv-laveng.lav.eng | 15.9 | 0.443 |
| newsdev2019-engu-engguj.eng.guj | 2.5 | 0.200 |
| newsdev2019-engu-gujeng.guj.eng | 7.1 | 0.302 |
| newsdev2019-enlt-englit.eng.lit | 10.6 | 0.407 |
| newsdev2019-enlt-liteng.lit.eng | 14.9 | 0.428 |
| newsdiscussdev2015-enfr-engfra.eng.fra | 22.6 | 0.507 |
| newsdiscussdev2015-enfr-fraeng.fra.eng | 23.5 | 0.495 |
| newsdiscusstest2015-enfr-engfra.eng.fra | 25.1 | 0.528 |
| newsdiscusstest2015-enfr-fraeng.fra.eng | 26.4 | 0.517 |
| newssyscomb2009-cesdeu.ces.deu | 13.1 | 0.432 |
| newssyscomb2009-ceseng.ces.eng | 18.4 | 0.463 |
| newssyscomb2009-cesfra.ces.fra | 15.5 | 0.452 |
| newssyscomb2009-cesita.ces.ita | 14.8 | 0.458 |
| newssyscomb2009-cesspa.ces.spa | 18.4 | 0.462 |
| newssyscomb2009-deuces.deu.ces | 10.5 | 0.381 |
| newssyscomb2009-deueng.deu.eng | 19.5 | 0.467 |
| newssyscomb2009-deufra.deu.fra | 16.4 | 0.459 |
| newssyscomb2009-deuita.deu.ita | 15.5 | 0.456 |
| newssyscomb2009-deuspa.deu.spa | 18.4 | 0.466 |
| newssyscomb2009-engces.eng.ces | 11.9 | 0.394 |
| newssyscomb2009-engdeu.eng.deu | 13.9 | 0.446 |
| newssyscomb2009-engfra.eng.fra | 20.7 | 0.502 |
| newssyscomb2009-engita.eng.ita | 21.3 | 0.516 |
| newssyscomb2009-engspa.eng.spa | 22.3 | 0.506 |
| newssyscomb2009-fraces.fra.ces | 11.5 | 0.390 |
| newssyscomb2009-fradeu.fra.deu | 13.4 | 0.437 |
| newssyscomb2009-fraeng.fra.eng | 22.8 | 0.499 |
| newssyscomb2009-fraita.fra.ita | 22.2 | 0.533 |
| newssyscomb2009-fraspa.fra.spa | 26.2 | 0.539 |
| newssyscomb2009-itaces.ita.ces | 12.3 | 0.397 |
| newssyscomb2009-itadeu.ita.deu | 13.3 | 0.436 |
| newssyscomb2009-itaeng.ita.eng | 24.7 | 0.517 |
| newssyscomb2009-itafra.ita.fra | 24.0 | 0.528 |
| newssyscomb2009-itaspa.ita.spa | 26.3 | 0.537 |
| newssyscomb2009-spaces.spa.ces | 12.0 | 0.400 |
| newssyscomb2009-spadeu.spa.deu | 13.9 | 0.440 |
| newssyscomb2009-spaeng.spa.eng | 22.9 | 0.509 |
| newssyscomb2009-spafra.spa.fra | 24.2 | 0.538 |
| newssyscomb2009-spaita.spa.ita | 24.5 | 0.547 |
| news-test2008-cesdeu.ces.deu | 12.0 | 0.422 |
| news-test2008-cesfra.ces.fra | 15.1 | 0.444 |
| news-test2008-cesspa.ces.spa | 16.4 | 0.451 |
| news-test2008-deuces.deu.ces | 9.9 | 0.369 |
| news-test2008-deueng.deu.eng | 18.0 | 0.456 |
| news-test2008-deufra.deu.fra | 16.4 | 0.453 |
| news-test2008-deuspa.deu.spa | 17.0 | 0.452 |
| news-test2008-engces.eng.ces | 10.5 | 0.375 |
| news-test2008-engdeu.eng.deu | 14.5 | 0.439 |
| news-test2008-engfra.eng.fra | 18.9 | 0.481 |
| news-test2008-engspa.eng.spa | 20.9 | 0.491 |
| news-test2008-fraces.fra.ces | 10.7 | 0.380 |
| news-test2008-fradeu.fra.deu | 13.8 | 0.435 |
| news-test2008-fraeng.fra.eng | 19.8 | 0.479 |
| news-test2008-fraspa.fra.spa | 24.8 | 0.522 |
| news-test2008-spaces.spa.ces | 11.0 | 0.380 |
| news-test2008-spadeu.spa.deu | 14.0 | 0.433 |
| news-test2008-spaeng.spa.eng | 20.6 | 0.488 |
| news-test2008-spafra.spa.fra | 23.3 | 0.518 |
| newstest2009-cesdeu.ces.deu | 12.9 | 0.427 |
| newstest2009-ceseng.ces.eng | 17.0 | 0.456 |
| newstest2009-cesfra.ces.fra | 15.4 | 0.447 |
| newstest2009-cesita.ces.ita | 14.9 | 0.454 |
| newstest2009-cesspa.ces.spa | 17.1 | 0.458 |
| newstest2009-deuces.deu.ces | 10.3 | 0.370 |
| newstest2009-deueng.deu.eng | 17.7 | 0.458 |
| newstest2009-deufra.deu.fra | 15.9 | 0.447 |
| newstest2009-deuita.deu.ita | 14.7 | 0.446 |
| newstest2009-deuspa.deu.spa | 17.2 | 0.453 |
| newstest2009-engces.eng.ces | 11.0 | 0.387 |
| newstest2009-engdeu.eng.deu | 13.6 | 0.440 |
| newstest2009-engfra.eng.fra | 20.3 | 0.496 |
| newstest2009-engita.eng.ita | 20.8 | 0.509 |
| newstest2009-engspa.eng.spa | 21.9 | 0.503 |
| newstest2009-fraces.fra.ces | 11.3 | 0.385 |
| newstest2009-fradeu.fra.deu | 14.0 | 0.436 |
| newstest2009-fraeng.fra.eng | 21.8 | 0.496 |
| newstest2009-fraita.fra.ita | 22.1 | 0.526 |
| newstest2009-fraspa.fra.spa | 24.8 | 0.525 |
| newstest2009-itaces.ita.ces | 11.5 | 0.382 |
| newstest2009-itadeu.ita.deu | 13.3 | 0.430 |
| newstest2009-itaeng.ita.eng | 23.6 | 0.508 |
| newstest2009-itafra.ita.fra | 22.9 | 0.516 |
| newstest2009-itaspa.ita.spa | 25.4 | 0.529 |
| newstest2009-spaces.spa.ces | 11.3 | 0.386 |
| newstest2009-spadeu.spa.deu | 13.5 | 0.434 |
| newstest2009-spaeng.spa.eng | 22.4 | 0.500 |
| newstest2009-spafra.spa.fra | 23.2 | 0.520 |
| newstest2009-spaita.spa.ita | 24.0 | 0.538 |
| newstest2010-cesdeu.ces.deu | 13.1 | 0.431 |
| newstest2010-ceseng.ces.eng | 16.9 | 0.459 |
| newstest2010-cesfra.ces.fra | 15.6 | 0.450 |
| newstest2010-cesspa.ces.spa | 18.5 | 0.467 |
| newstest2010-deuces.deu.ces | 11.4 | 0.387 |
| newstest2010-deueng.deu.eng | 19.6 | 0.481 |
| newstest2010-deufra.deu.fra | 17.7 | 0.471 |
| newstest2010-deuspa.deu.spa | 20.0 | 0.478 |
| newstest2010-engces.eng.ces | 11.4 | 0.393 |
| newstest2010-engdeu.eng.deu | 15.1 | 0.448 |
| newstest2010-engfra.eng.fra | 21.4 | 0.506 |
| newstest2010-engspa.eng.spa | 25.0 | 0.525 |
| newstest2010-fraces.fra.ces | 11.1 | 0.386 |
| newstest2010-fradeu.fra.deu | 14.2 | 0.442 |
| newstest2010-fraeng.fra.eng | 22.6 | 0.507 |
| newstest2010-fraspa.fra.spa | 26.6 | 0.542 |
| newstest2010-spaces.spa.ces | 12.2 | 0.396 |
| newstest2010-spadeu.spa.deu | 15.1 | 0.445 |
| newstest2010-spaeng.spa.eng | 24.3 | 0.521 |
| newstest2010-spafra.spa.fra | 24.8 | 0.536 |
| newstest2011-cesdeu.ces.deu | 13.1 | 0.423 |
| newstest2011-ceseng.ces.eng | 18.2 | 0.463 |
| newstest2011-cesfra.ces.fra | 17.4 | 0.458 |
| newstest2011-cesspa.ces.spa | 18.9 | 0.464 |
| newstest2011-deuces.deu.ces | 11.2 | 0.376 |
| newstest2011-deueng.deu.eng | 18.3 | 0.464 |
| newstest2011-deufra.deu.fra | 17.0 | 0.457 |
| newstest2011-deuspa.deu.spa | 19.2 | 0.464 |
| newstest2011-engces.eng.ces | 12.4 | 0.395 |
| newstest2011-engdeu.eng.deu | 14.5 | 0.437 |
| newstest2011-engfra.eng.fra | 23.6 | 0.522 |
| newstest2011-engspa.eng.spa | 26.6 | 0.530 |
| newstest2011-fraces.fra.ces | 12.5 | 0.394 |
| newstest2011-fradeu.fra.deu | 14.2 | 0.433 |
| newstest2011-fraeng.fra.eng | 24.3 | 0.521 |
| newstest2011-fraspa.fra.spa | 29.1 | 0.551 |
| newstest2011-spaces.spa.ces | 12.3 | 0.390 |
| newstest2011-spadeu.spa.deu | 14.4 | 0.435 |
| newstest2011-spaeng.spa.eng | 25.0 | 0.521 |
| newstest2011-spafra.spa.fra | 25.6 | 0.537 |
| newstest2012-cesdeu.ces.deu | 13.1 | 0.420 |
| newstest2012-ceseng.ces.eng | 17.5 | 0.457 |
| newstest2012-cesfra.ces.fra | 16.8 | 0.452 |
| newstest2012-cesrus.ces.rus | 11.2 | 0.379 |
| newstest2012-cesspa.ces.spa | 18.1 | 0.457 |
| newstest2012-deuces.deu.ces | 11.2 | 0.368 |
| newstest2012-deueng.deu.eng | 19.4 | 0.472 |
| newstest2012-deufra.deu.fra | 17.7 | 0.464 |
| newstest2012-deurus.deu.rus | 10.3 | 0.370 |
| newstest2012-deuspa.deu.spa | 19.6 | 0.467 |
| newstest2012-engces.eng.ces | 11.1 | 0.375 |
| newstest2012-engdeu.eng.deu | 14.6 | 0.440 |
| newstest2012-engfra.eng.fra | 22.4 | 0.512 |
| newstest2012-engrus.eng.rus | 17.6 | 0.452 |
| newstest2012-engspa.eng.spa | 26.5 | 0.527 |
| newstest2012-fraces.fra.ces | 11.9 | 0.383 |
| newstest2012-fradeu.fra.deu | 14.6 | 0.437 |
| newstest2012-fraeng.fra.eng | 24.3 | 0.516 |
| newstest2012-frarus.fra.rus | 11.9 | 0.393 |
| newstest2012-fraspa.fra.spa | 28.3 | 0.545 |
| newstest2012-rusces.rus.ces | 9.0 | 0.340 |
| newstest2012-rusdeu.rus.deu | 10.0 | 0.383 |
| newstest2012-ruseng.rus.eng | 22.4 | 0.492 |
| newstest2012-rusfra.rus.fra | 13.3 | 0.427 |
| newstest2012-russpa.rus.spa | 16.6 | 0.437 |
| newstest2012-spaces.spa.ces | 11.9 | 0.381 |
| newstest2012-spadeu.spa.deu | 14.8 | 0.440 |
| newstest2012-spaeng.spa.eng | 26.5 | 0.534 |
| newstest2012-spafra.spa.fra | 25.0 | 0.539 |
| newstest2012-sparus.spa.rus | 12.4 | 0.401 |
| newstest2013-cesdeu.ces.deu | 14.3 | 0.434 |
| newstest2013-ceseng.ces.eng | 18.5 | 0.463 |
| newstest2013-cesfra.ces.fra | 16.6 | 0.444 |
| newstest2013-cesrus.ces.rus | 13.6 | 0.406 |
| newstest2013-cesspa.ces.spa | 18.2 | 0.455 |
| newstest2013-deuces.deu.ces | 11.7 | 0.380 |
| newstest2013-deueng.deu.eng | 20.9 | 0.481 |
| newstest2013-deufra.deu.fra | 18.1 | 0.460 |
| newstest2013-deurus.deu.rus | 11.7 | 0.384 |
| newstest2013-deuspa.deu.spa | 19.4 | 0.463 |
| newstest2013-engces.eng.ces | 12.7 | 0.394 |
| newstest2013-engdeu.eng.deu | 16.7 | 0.455 |
| newstest2013-engfra.eng.fra | 22.7 | 0.499 |
| newstest2013-engrus.eng.rus | 13.3 | 0.408 |
| newstest2013-engspa.eng.spa | 23.6 | 0.506 |
| newstest2013-fraces.fra.ces | 11.8 | 0.379 |
| newstest2013-fradeu.fra.deu | 15.6 | 0.446 |
| newstest2013-fraeng.fra.eng | 23.6 | 0.506 |
| newstest2013-frarus.fra.rus | 12.9 | 0.399 |
| newstest2013-fraspa.fra.spa | 25.3 | 0.519 |
| newstest2013-rusces.rus.ces | 11.6 | 0.376 |
| newstest2013-rusdeu.rus.deu | 12.4 | 0.410 |
| newstest2013-ruseng.rus.eng | 17.8 | 0.448 |
| newstest2013-rusfra.rus.fra | 14.8 | 0.434 |
| newstest2013-russpa.rus.spa | 17.9 | 0.446 |
| newstest2013-spaces.spa.ces | 12.5 | 0.391 |
| newstest2013-spadeu.spa.deu | 15.9 | 0.449 |
| newstest2013-spaeng.spa.eng | 24.0 | 0.518 |
| newstest2013-spafra.spa.fra | 24.3 | 0.522 |
| newstest2013-sparus.spa.rus | 13.9 | 0.411 |
| newstest2014-csen-ceseng.ces.eng | 19.0 | 0.475 |
| newstest2014-deen-deueng.deu.eng | 19.2 | 0.468 |
| newstest2014-fren-fraeng.fra.eng | 23.9 | 0.521 |
| newstest2014-hien-enghin.eng.hin | 5.9 | 0.268 |
| newstest2014-hien-hineng.hin.eng | 8.8 | 0.348 |
| newstest2014-ruen-ruseng.rus.eng | 19.1 | 0.475 |
| newstest2015-encs-ceseng.ces.eng | 17.9 | 0.450 |
| newstest2015-encs-engces.eng.ces | 12.1 | 0.392 |
| newstest2015-ende-deueng.deu.eng | 21.1 | 0.480 |
| newstest2015-ende-engdeu.eng.deu | 18.7 | 0.475 |
| newstest2015-enru-engrus.eng.rus | 15.4 | 0.431 |
| newstest2015-enru-ruseng.rus.eng | 18.1 | 0.454 |
| newstest2016-encs-ceseng.ces.eng | 18.6 | 0.465 |
| newstest2016-encs-engces.eng.ces | 13.3 | 0.403 |
| newstest2016-ende-deueng.deu.eng | 24.0 | 0.508 |
| newstest2016-ende-engdeu.eng.deu | 21.4 | 0.494 |
| newstest2016-enro-engron.eng.ron | 16.8 | 0.457 |
| newstest2016-enro-roneng.ron.eng | 24.9 | 0.522 |
| newstest2016-enru-engrus.eng.rus | 13.7 | 0.417 |
| newstest2016-enru-ruseng.rus.eng | 17.3 | 0.453 |
| newstest2017-encs-ceseng.ces.eng | 16.7 | 0.444 |
| newstest2017-encs-engces.eng.ces | 10.9 | 0.375 |
| newstest2017-ende-deueng.deu.eng | 21.5 | 0.484 |
| newstest2017-ende-engdeu.eng.deu | 17.5 | 0.464 |
| newstest2017-enlv-englav.eng.lav | 9.1 | 0.388 |
| newstest2017-enlv-laveng.lav.eng | 11.5 | 0.404 |
| newstest2017-enru-engrus.eng.rus | 14.8 | 0.432 |
| newstest2017-enru-ruseng.rus.eng | 19.3 | 0.467 |
| newstest2018-encs-ceseng.ces.eng | 17.1 | 0.450 |
| newstest2018-encs-engces.eng.ces | 10.9 | 0.380 |
| newstest2018-ende-deueng.deu.eng | 26.0 | 0.518 |
| newstest2018-ende-engdeu.eng.deu | 24.3 | 0.514 |
| newstest2018-enru-engrus.eng.rus | 12.5 | 0.417 |
| newstest2018-enru-ruseng.rus.eng | 16.4 | 0.443 |
| newstest2019-csde-cesdeu.ces.deu | 13.9 | 0.432 |
| newstest2019-decs-deuces.deu.ces | 11.7 | 0.383 |
| newstest2019-deen-deueng.deu.eng | 22.2 | 0.483 |
| newstest2019-defr-deufra.deu.fra | 20.1 | 0.496 |
| newstest2019-encs-engces.eng.ces | 12.3 | 0.389 |
| newstest2019-ende-engdeu.eng.deu | 22.0 | 0.497 |
| newstest2019-engu-engguj.eng.guj | 3.1 | 0.208 |
| newstest2019-enlt-englit.eng.lit | 7.8 | 0.369 |
| newstest2019-enru-engrus.eng.rus | 14.6 | 0.408 |
| newstest2019-frde-fradeu.fra.deu | 16.4 | 0.483 |
| newstest2019-guen-gujeng.guj.eng | 6.1 | 0.288 |
| newstest2019-lten-liteng.lit.eng | 16.9 | 0.456 |
| newstest2019-ruen-ruseng.rus.eng | 20.2 | 0.468 |
| Tatoeba-test.afr-ang.afr.ang | 16.0 | 0.152 |
| Tatoeba-test.afr-ces.afr.ces | 10.2 | 0.333 |
| Tatoeba-test.afr-dan.afr.dan | 32.6 | 0.651 |
| Tatoeba-test.afr-deu.afr.deu | 34.5 | 0.556 |
| Tatoeba-test.afr-eng.afr.eng | 48.1 | 0.638 |
| Tatoeba-test.afr-enm.afr.enm | 10.2 | 0.416 |
| Tatoeba-test.afr-fra.afr.fra | 41.9 | 0.612 |
| Tatoeba-test.afr-fry.afr.fry | 0.0 | 0.112 |
| Tatoeba-test.afr-gos.afr.gos | 0.3 | 0.068 |
| Tatoeba-test.afr-isl.afr.isl | 12.2 | 0.419 |
| Tatoeba-test.afr-ita.afr.ita | 48.7 | 0.637 |
| Tatoeba-test.afr-lat.afr.lat | 8.4 | 0.407 |
| Tatoeba-test.afr-ltz.afr.ltz | 19.0 | 0.357 |
| Tatoeba-test.afr-mkd.afr.mkd | 0.0 | 0.238 |
| Tatoeba-test.afr-msa.afr.msa | 1.4 | 0.080 |
| Tatoeba-test.afr-nld.afr.nld | 45.7 | 0.643 |
| Tatoeba-test.afr-nor.afr.nor | 55.3 | 0.687 |
| Tatoeba-test.afr-pol.afr.pol | 39.3 | 0.563 |
| Tatoeba-test.afr-por.afr.por | 33.9 | 0.586 |
| Tatoeba-test.afr-ron.afr.ron | 22.6 | 0.475 |
| Tatoeba-test.afr-rus.afr.rus | 32.1 | 0.525 |
| Tatoeba-test.afr-spa.afr.spa | 44.1 | 0.611 |
| Tatoeba-test.afr-swe.afr.swe | 71.6 | 0.814 |
| Tatoeba-test.afr-ukr.afr.ukr | 31.0 | 0.481 |
| Tatoeba-test.afr-yid.afr.yid | 100.0 | 1.000 |
| Tatoeba-test.ang-afr.ang.afr | 0.0 | 0.133 |
| Tatoeba-test.ang-ces.ang.ces | 5.5 | 0.129 |
| Tatoeba-test.ang-dan.ang.dan | 22.2 | 0.345 |
| Tatoeba-test.ang-deu.ang.deu | 6.3 | 0.251 |
| Tatoeba-test.ang-eng.ang.eng | 7.9 | 0.255 |
| Tatoeba-test.ang-enm.ang.enm | 0.8 | 0.133 |
| Tatoeba-test.ang-fao.ang.fao | 16.0 | 0.086 |
| Tatoeba-test.ang-fra.ang.fra | 6.0 | 0.185 |
| Tatoeba-test.ang-gos.ang.gos | 0.6 | 0.000 |
| Tatoeba-test.ang-isl.ang.isl | 16.0 | 0.102 |
| Tatoeba-test.ang-ita.ang.ita | 13.2 | 0.301 |
| Tatoeba-test.ang-kur.ang.kur | 7.6 | 0.062 |
| Tatoeba-test.ang-lad.ang.lad | 0.2 | 0.025 |
| Tatoeba-test.ang-lat.ang.lat | 6.6 | 0.198 |
| Tatoeba-test.ang-ltz.ang.ltz | 5.5 | 0.121 |
| Tatoeba-test.ang-por.ang.por | 11.4 | 0.498 |
| Tatoeba-test.ang-rus.ang.rus | 2.4 | 0.103 |
| Tatoeba-test.ang-spa.ang.spa | 8.1 | 0.249 |
| Tatoeba-test.ang-ukr.ang.ukr | 16.4 | 0.195 |
| Tatoeba-test.ang-yid.ang.yid | 1.1 | 0.117 |
| Tatoeba-test.arg-eng.arg.eng | 28.2 | 0.394 |
| Tatoeba-test.arg-fra.arg.fra | 39.8 | 0.445 |
| Tatoeba-test.arg-spa.arg.spa | 52.3 | 0.608 |
| Tatoeba-test.asm-dan.asm.dan | 8.6 | 0.261 |
| Tatoeba-test.asm-deu.asm.deu | 19.2 | 0.629 |
| Tatoeba-test.asm-eng.asm.eng | 18.2 | 0.369 |
| Tatoeba-test.asm-fra.asm.fra | 4.3 | 0.145 |
| Tatoeba-test.asm-hin.asm.hin | 4.5 | 0.366 |
| Tatoeba-test.asm-ita.asm.ita | 12.1 | 0.310 |
| Tatoeba-test.asm-zza.asm.zza | 8.1 | 0.050 |
| Tatoeba-test.ast-deu.ast.deu | 30.1 | 0.463 |
| Tatoeba-test.ast-eng.ast.eng | 27.6 | 0.441 |
| Tatoeba-test.ast-fra.ast.fra | 29.4 | 0.501 |
| Tatoeba-test.ast-gos.ast.gos | 2.6 | 0.030 |
| Tatoeba-test.ast-nds.ast.nds | 10.0 | 0.280 |
| Tatoeba-test.ast-nld.ast.nld | 100.0 | 1.000 |
| Tatoeba-test.ast-por.ast.por | 100.0 | 1.000 |
| Tatoeba-test.ast-rus.ast.rus | 35.9 | 0.682 |
| Tatoeba-test.ast-spa.ast.spa | 41.7 | 0.601 |
| Tatoeba-test.awa-eng.awa.eng | 2.4 | 0.201 |
| Tatoeba-test.bel-bul.bel.bul | 53.7 | 0.808 |
| Tatoeba-test.bel-ces.bel.ces | 27.6 | 0.483 |
| Tatoeba-test.bel-cym.bel.cym | 32.6 | 0.449 |
| Tatoeba-test.bel-dan.bel.dan | 29.1 | 0.506 |
| Tatoeba-test.bel-deu.bel.deu | 29.5 | 0.522 |
| Tatoeba-test.bel-eng.bel.eng | 31.8 | 0.512 |
| Tatoeba-test.bel-fra.bel.fra | 30.9 | 0.527 |
| Tatoeba-test.bel-hbs.bel.hbs | 39.3 | 0.608 |
| Tatoeba-test.bel-ita.bel.ita | 32.8 | 0.540 |
| Tatoeba-test.bel-kur.bel.kur | 12.7 | 0.178 |
| Tatoeba-test.bel-lad.bel.lad | 4.5 | 0.185 |
| Tatoeba-test.bel-lat.bel.lat | 3.7 | 0.251 |
| Tatoeba-test.bel-mkd.bel.mkd | 19.3 | 0.531 |
| Tatoeba-test.bel-msa.bel.msa | 1.0 | 0.147 |
| Tatoeba-test.bel-nld.bel.nld | 27.1 | 0.481 |
| Tatoeba-test.bel-nor.bel.nor | 37.0 | 0.494 |
| Tatoeba-test.bel-pol.bel.pol | 34.8 | 0.565 |
| Tatoeba-test.bel-por.bel.por | 21.7 | 0.401 |
| Tatoeba-test.bel-rus.bel.rus | 42.3 | 0.643 |
| Tatoeba-test.bel-spa.bel.spa | 28.2 | 0.534 |
| Tatoeba-test.bel-ukr.bel.ukr | 41.6 | 0.643 |
| Tatoeba-test.bel-yid.bel.yid | 2.9 | 0.254 |
| Tatoeba-test.ben-deu.ben.deu | 34.6 | 0.408 |
| Tatoeba-test.ben-eng.ben.eng | 26.5 | 0.430 |
| Tatoeba-test.ben-fra.ben.fra | 21.6 | 0.466 |
| Tatoeba-test.ben-ita.ben.ita | 26.8 | 0.424 |
| Tatoeba-test.ben-spa.ben.spa | 28.9 | 0.473 |
| Tatoeba-test.bho-eng.bho.eng | 21.0 | 0.384 |
| Tatoeba-test.bho-fra.bho.fra | 100.0 | 1.000 |
| Tatoeba-test.bre-ces.bre.ces | 2.2 | 0.178 |
| Tatoeba-test.bre-deu.bre.deu | 7.7 | 0.296 |
| Tatoeba-test.bre-eng.bre.eng | 13.6 | 0.309 |
| Tatoeba-test.bre-fra.bre.fra | 8.6 | 0.251 |
| Tatoeba-test.bre-ita.bre.ita | 12.2 | 0.272 |
| Tatoeba-test.bre-msa.bre.msa | 0.9 | 0.081 |
| Tatoeba-test.bre-nld.bre.nld | 3.0 | 0.217 |
| Tatoeba-test.bre-nor.bre.nor | 1.4 | 0.158 |
| Tatoeba-test.bul-bel.bul.bel | 14.1 | 0.582 |
| Tatoeba-test.bul-ces.bul.ces | 52.8 | 0.725 |
| Tatoeba-test.bul-dan.bul.dan | 66.9 | 0.951 |
| Tatoeba-test.bul-deu.bul.deu | 31.2 | 0.530 |
| Tatoeba-test.bul-ell.bul.ell | 29.1 | 0.497 |
| Tatoeba-test.bul-eng.bul.eng | 36.5 | 0.547 |
| Tatoeba-test.bul-enm.bul.enm | 5.3 | 0.299 |
| Tatoeba-test.bul-fas.bul.fas | 8.9 | 0.511 |
| Tatoeba-test.bul-fra.bul.fra | 36.1 | 0.558 |
| Tatoeba-test.bul-hbs.bul.hbs | 100.0 | 1.000 |
| Tatoeba-test.bul-ita.bul.ita | 24.5 | 0.479 |
| Tatoeba-test.bul-lad.bul.lad | 8.1 | 0.302 |
| Tatoeba-test.bul-lat.bul.lat | 13.4 | 0.337 |
| Tatoeba-test.bul-mkd.bul.mkd | 38.2 | 0.811 |
| Tatoeba-test.bul-msa.bul.msa | 15.0 | 0.431 |
| Tatoeba-test.bul-nld.bul.nld | 31.8 | 0.505 |
| Tatoeba-test.bul-nor.bul.nor | 66.9 | 0.951 |
| Tatoeba-test.bul-pol.bul.pol | 24.4 | 0.461 |
| Tatoeba-test.bul-por.bul.por | 29.2 | 0.484 |
| Tatoeba-test.bul-ron.bul.ron | 42.7 | 0.776 |
| Tatoeba-test.bul-rus.bul.rus | 28.7 | 0.522 |
| Tatoeba-test.bul-spa.bul.spa | 32.1 | 0.520 |
| Tatoeba-test.bul-swe.bul.swe | 66.9 | 0.611 |
| Tatoeba-test.bul-ukr.bul.ukr | 34.3 | 0.567 |
| Tatoeba-test.bul-yid.bul.yid | 13.7 | 0.163 |
| Tatoeba-test.cat-deu.cat.deu | 31.0 | 0.523 |
| Tatoeba-test.cat-ell.cat.ell | 17.0 | 0.423 |
| Tatoeba-test.cat-eng.cat.eng | 39.4 | 0.582 |
| Tatoeba-test.cat-enm.cat.enm | 5.3 | 0.370 |
| Tatoeba-test.cat-fao.cat.fao | 16.0 | 0.301 |
| Tatoeba-test.cat-fra.cat.fra | 41.0 | 0.606 |
| Tatoeba-test.cat-ita.cat.ita | 39.8 | 0.626 |
| Tatoeba-test.cat-nld.cat.nld | 35.9 | 0.555 |
| Tatoeba-test.cat-pol.cat.pol | 23.0 | 0.456 |
| Tatoeba-test.cat-por.cat.por | 38.9 | 0.618 |
| Tatoeba-test.cat-ron.cat.ron | 16.0 | 0.311 |
| Tatoeba-test.cat-rus.cat.rus | 28.8 | 0.507 |
| Tatoeba-test.cat-spa.cat.spa | 55.2 | 0.731 |
| Tatoeba-test.cat-swe.cat.swe | 100.0 | 1.000 |
| Tatoeba-test.cat-ukr.cat.ukr | 30.8 | 0.512 |
| Tatoeba-test.cat-yid.cat.yid | 100.0 | 1.000 |
| Tatoeba-test.ces-afr.ces.afr | 17.0 | 0.426 |
| Tatoeba-test.ces-ang.ces.ang | 3.3 | 0.165 |
| Tatoeba-test.ces-bel.ces.bel | 23.3 | 0.466 |
| Tatoeba-test.ces-bre.ces.bre | 0.7 | 0.126 |
| Tatoeba-test.ces-bul.ces.bul | 45.2 | 0.690 |
| Tatoeba-test.ces-cor.ces.cor | 3.4 | 0.072 |
| Tatoeba-test.ces-dan.ces.dan | 12.7 | 0.706 |
| Tatoeba-test.ces-deu.ces.deu | 32.2 | 0.526 |
| Tatoeba-test.ces-ell.ces.ell | 24.4 | 0.422 |
| Tatoeba-test.ces-eng.ces.eng | 33.8 | 0.529 |
| Tatoeba-test.ces-enm.ces.enm | 1.7 | 0.157 |
| Tatoeba-test.ces-fao.ces.fao | 3.7 | 0.252 |
| Tatoeba-test.ces-fas.ces.fas | 20.1 | 0.229 |
| Tatoeba-test.ces-fra.ces.fra | 36.9 | 0.564 |
| Tatoeba-test.ces-fry.ces.fry | 7.7 | 0.338 |
| Tatoeba-test.ces-grc.ces.grc | 0.6 | 0.011 |
| Tatoeba-test.ces-hbs.ces.hbs | 39.7 | 0.580 |
| Tatoeba-test.ces-hsb.ces.hsb | 7.0 | 0.230 |
| Tatoeba-test.ces-ita.ces.ita | 28.2 | 0.516 |
| Tatoeba-test.ces-lad.ces.lad | 1.7 | 0.303 |
| Tatoeba-test.ces-lat.ces.lat | 6.5 | 0.304 |
| Tatoeba-test.ces-ltz.ces.ltz | 6.6 | 0.202 |
| Tatoeba-test.ces-mkd.ces.mkd | 31.4 | 0.586 |
| Tatoeba-test.ces-msa.ces.msa | 6.4 | 0.312 |
| Tatoeba-test.ces-nds.ces.nds | 19.9 | 0.468 |
| Tatoeba-test.ces-nld.ces.nld | 35.1 | 0.535 |
| Tatoeba-test.ces-nor.ces.nor | 41.7 | 0.610 |
| Tatoeba-test.ces-pol.ces.pol | 30.5 | 0.530 |
| Tatoeba-test.ces-por.ces.por | 33.0 | 0.533 |
| Tatoeba-test.ces-ron.ces.ron | 9.9 | 0.406 |
| Tatoeba-test.ces-rus.ces.rus | 36.9 | 0.564 |
| Tatoeba-test.ces-slv.ces.slv | 4.1 | 0.236 |
| Tatoeba-test.ces-spa.ces.spa | 33.3 | 0.531 |
| Tatoeba-test.ces-swe.ces.swe | 51.4 | 0.586 |
| Tatoeba-test.ces-swg.ces.swg | 4.8 | 0.118 |
| Tatoeba-test.ces-ukr.ces.ukr | 34.6 | 0.522 |
| Tatoeba-test.ces-yid.ces.yid | 2.1 | 0.252 |
| Tatoeba-test.cor-ces.cor.ces | 8.9 | 0.233 |
| Tatoeba-test.cor-cym.cor.cym | 6.7 | 0.205 |
| Tatoeba-test.cor-deu.cor.deu | 4.8 | 0.211 |
| Tatoeba-test.cor-ell.cor.ell | 3.4 | 0.182 |
| Tatoeba-test.cor-eng.cor.eng | 4.4 | 0.193 |
| Tatoeba-test.cor-fra.cor.fra | 5.0 | 0.221 |
| Tatoeba-test.cor-ita.cor.ita | 6.6 | 0.211 |
| Tatoeba-test.cor-nld.cor.nld | 9.3 | 0.221 |
| Tatoeba-test.cor-nor.cor.nor | 19.6 | 0.282 |
| Tatoeba-test.cor-pol.cor.pol | 2.9 | 0.171 |
| Tatoeba-test.cor-por.cor.por | 4.3 | 0.187 |
| Tatoeba-test.cor-rus.cor.rus | 2.4 | 0.154 |
| Tatoeba-test.cor-spa.cor.spa | 3.6 | 0.187 |
| Tatoeba-test.cos-deu.cos.deu | 0.0 | 0.877 |
| Tatoeba-test.cos-eng.cos.eng | 39.2 | 0.473 |
| Tatoeba-test.cos-fra.cos.fra | 19.0 | 0.352 |
| Tatoeba-test.cos-pms.cos.pms | 1.6 | 0.066 |
| Tatoeba-test.csb-deu.csb.deu | 17.5 | 0.336 |
| Tatoeba-test.csb-eng.csb.eng | 14.0 | 0.347 |
| Tatoeba-test.csb-spa.csb.spa | 3.8 | 0.278 |
| Tatoeba-test.cym-bel.cym.bel | 100.0 | 1.000 |
| Tatoeba-test.cym-cor.cym.cor | 0.0 | 0.014 |
| Tatoeba-test.cym-deu.cym.deu | 32.6 | 0.507 |
| Tatoeba-test.cym-eng.cym.eng | 33.1 | 0.496 |
| Tatoeba-test.cym-fra.cym.fra | 27.0 | 0.447 |
| Tatoeba-test.cym-gla.cym.gla | 5.7 | 0.223 |
| Tatoeba-test.cym-gle.cym.gle | 13.1 | 0.380 |
| Tatoeba-test.cym-glv.cym.glv | 5.3 | 0.186 |
| Tatoeba-test.cym-ita.cym.ita | 28.3 | 0.498 |
| Tatoeba-test.cym-lat.cym.lat | 3.7 | 0.185 |
| Tatoeba-test.cym-msa.cym.msa | 8.0 | 0.067 |
| Tatoeba-test.cym-nor.cym.nor | 37.5 | 0.603 |
| Tatoeba-test.cym-pol.cym.pol | 37.8 | 0.488 |
| Tatoeba-test.cym-rus.cym.rus | 32.1 | 0.480 |
| Tatoeba-test.cym-spa.cym.spa | 31.6 | 0.523 |
| Tatoeba-test.cym-yid.cym.yid | 4.8 | 0.072 |
| Tatoeba-test.dan-afr.dan.afr | 40.5 | 0.774 |
| Tatoeba-test.dan-ang.dan.ang | 1.2 | 0.066 |
| Tatoeba-test.dan-asm.dan.asm | 13.1 | 0.156 |
| Tatoeba-test.dan-bel.dan.bel | 27.2 | 0.746 |
| Tatoeba-test.dan-bul.dan.bul | 35.4 | 0.529 |
| Tatoeba-test.dan-ces.dan.ces | 19.0 | 0.349 |
| Tatoeba-test.dan-deu.dan.deu | 35.8 | 0.582 |
| Tatoeba-test.dan-ell.dan.ell | 19.0 | 0.337 |
| Tatoeba-test.dan-eng.dan.eng | 43.4 | 0.609 |
| Tatoeba-test.dan-enm.dan.enm | 18.1 | 0.515 |
| Tatoeba-test.dan-fao.dan.fao | 9.7 | 0.162 |
| Tatoeba-test.dan-fas.dan.fas | 14.1 | 0.410 |
| Tatoeba-test.dan-fra.dan.fra | 47.0 | 0.640 |
| Tatoeba-test.dan-gos.dan.gos | 2.6 | 0.195 |
| Tatoeba-test.dan-isl.dan.isl | 12.2 | 0.344 |
| Tatoeba-test.dan-ita.dan.ita | 36.3 | 0.589 |
| Tatoeba-test.dan-kur.dan.kur | 3.5 | 0.270 |
| Tatoeba-test.dan-lad.dan.lad | 0.4 | 0.096 |
| Tatoeba-test.dan-lat.dan.lat | 3.9 | 0.376 |
| Tatoeba-test.dan-lav.dan.lav | 68.7 | 0.786 |
| Tatoeba-test.dan-ltz.dan.ltz | 71.4 | 0.554 |
| Tatoeba-test.dan-mar.dan.mar | 3.7 | 0.220 |
| Tatoeba-test.dan-nds.dan.nds | 4.9 | 0.219 |
| Tatoeba-test.dan-nld.dan.nld | 47.2 | 0.650 |
| Tatoeba-test.dan-nor.dan.nor | 58.8 | 0.749 |
| Tatoeba-test.dan-pol.dan.pol | 27.1 | 0.527 |
| Tatoeba-test.dan-por.dan.por | 41.5 | 0.616 |
| Tatoeba-test.dan-ron.dan.ron | 100.0 | 1.000 |
| Tatoeba-test.dan-rus.dan.rus | 30.8 | 0.518 |
| Tatoeba-test.dan-spa.dan.spa | 36.6 | 0.578 |
| Tatoeba-test.dan-swe.dan.swe | 53.8 | 0.696 |
| Tatoeba-test.dan-swg.dan.swg | 4.8 | 0.184 |
| Tatoeba-test.dan-ukr.dan.ukr | 15.9 | 0.489 |
| Tatoeba-test.dan-urd.dan.urd | 21.7 | 0.544 |
| Tatoeba-test.dan-yid.dan.yid | 13.0 | 0.252 |
| Tatoeba-test.deu-afr.deu.afr | 37.5 | 0.566 |
| Tatoeba-test.deu-ang.deu.ang | 0.6 | 0.131 |
| Tatoeba-test.deu-asm.deu.asm | 20.0 | 0.580 |
| Tatoeba-test.deu-ast.deu.ast | 16.5 | 0.389 |
| Tatoeba-test.deu-bel.deu.bel | 19.6 | 0.450 |
| Tatoeba-test.deu-ben.deu.ben | 34.5 | 0.319 |
| Tatoeba-test.deu-bre.deu.bre | 3.2 | 0.196 |
| Tatoeba-test.deu-bul.deu.bul | 32.6 | 0.517 |
| Tatoeba-test.deu-cat.deu.cat | 28.4 | 0.503 |
| Tatoeba-test.deu-ces.deu.ces | 24.3 | 0.465 |
| Tatoeba-test.deu-cor.deu.cor | 0.2 | 0.043 |
| Tatoeba-test.deu-cos.deu.cos | 2.4 | 0.020 |
| Tatoeba-test.deu-csb.deu.csb | 4.4 | 0.178 |
| Tatoeba-test.deu-cym.deu.cym | 11.3 | 0.378 |
| Tatoeba-test.deu-dan.deu.dan | 37.8 | 0.579 |
| Tatoeba-test.deu-dsb.deu.dsb | 0.1 | 0.082 |
| Tatoeba-test.deu-egl.deu.egl | 3.3 | 0.050 |
| Tatoeba-test.deu-ell.deu.ell | 27.1 | 0.485 |
| Tatoeba-test.deu-eng.deu.eng | 34.7 | 0.539 |
| Tatoeba-test.deu-enm.deu.enm | 6.7 | 0.331 |
| Tatoeba-test.deu-fas.deu.fas | 4.5 | 0.235 |
| Tatoeba-test.deu-fra.deu.fra | 31.9 | 0.527 |
| Tatoeba-test.deu-frr.deu.frr | 0.2 | 0.101 |
| Tatoeba-test.deu-fry.deu.fry | 13.7 | 0.358 |
| Tatoeba-test.deu-gla.deu.gla | 7.2 | 0.304 |
| Tatoeba-test.deu-gle.deu.gle | 8.9 | 0.349 |
| Tatoeba-test.deu-glg.deu.glg | 28.9 | 0.513 |
| Tatoeba-test.deu-gos.deu.gos | 0.7 | 0.157 |
| Tatoeba-test.deu-got.deu.got | 0.2 | 0.010 |
| Tatoeba-test.deu-grc.deu.grc | 0.1 | 0.005 |
| Tatoeba-test.deu-gsw.deu.gsw | 0.2 | 0.073 |
| Tatoeba-test.deu-hbs.deu.hbs | 23.2 | 0.470 |
| Tatoeba-test.deu-hin.deu.hin | 12.5 | 0.367 |
| Tatoeba-test.deu-hsb.deu.hsb | 5.4 | 0.249 |
| Tatoeba-test.deu-hye.deu.hye | 12.9 | 0.263 |
| Tatoeba-test.deu-isl.deu.isl | 16.5 | 0.395 |
| Tatoeba-test.deu-ita.deu.ita | 29.2 | 0.536 |
| Tatoeba-test.deu-ksh.deu.ksh | 0.6 | 0.092 |
| Tatoeba-test.deu-kur.deu.kur | 11.2 | 0.183 |
| Tatoeba-test.deu-lad.deu.lad | 0.3 | 0.112 |
| Tatoeba-test.deu-lat.deu.lat | 6.4 | 0.301 |
| Tatoeba-test.deu-lav.deu.lav | 29.6 | 0.502 |
| Tatoeba-test.deu-lit.deu.lit | 17.4 | 0.445 |
| Tatoeba-test.deu-ltz.deu.ltz | 18.5 | 0.380 |
| Tatoeba-test.deu-mar.deu.mar | 7.9 | 0.245 |
| Tatoeba-test.deu-mkd.deu.mkd | 21.9 | 0.449 |
| Tatoeba-test.deu-msa.deu.msa | 21.9 | 0.478 |
| Tatoeba-test.deu-nds.deu.nds | 13.6 | 0.391 |
| Tatoeba-test.deu-nld.deu.nld | 37.2 | 0.574 |
| Tatoeba-test.deu-nor.deu.nor | 34.5 | 0.562 |
| Tatoeba-test.deu-oci.deu.oci | 4.7 | 0.261 |
| Tatoeba-test.deu-orv.deu.orv | 0.2 | 0.006 |
| Tatoeba-test.deu-pdc.deu.pdc | 0.6 | 0.064 |
| Tatoeba-test.deu-pms.deu.pms | 0.2 | 0.064 |
| Tatoeba-test.deu-pol.deu.pol | 23.6 | 0.477 |
| Tatoeba-test.deu-por.deu.por | 25.1 | 0.480 |
| Tatoeba-test.deu-prg.deu.prg | 0.2 | 0.070 |
| Tatoeba-test.deu-roh.deu.roh | 0.2 | 0.059 |
| Tatoeba-test.deu-rom.deu.rom | 5.2 | 0.179 |
| Tatoeba-test.deu-ron.deu.ron | 25.7 | 0.484 |
| Tatoeba-test.deu-rus.deu.rus | 27.1 | 0.494 |
| Tatoeba-test.deu-scn.deu.scn | 1.6 | 0.076 |
| Tatoeba-test.deu-sco.deu.sco | 10.8 | 0.281 |
| Tatoeba-test.deu-slv.deu.slv | 8.1 | 0.251 |
| Tatoeba-test.deu-spa.deu.spa | 31.5 | 0.534 |
| Tatoeba-test.deu-stq.deu.stq | 0.6 | 0.144 |
| Tatoeba-test.deu-swe.deu.swe | 39.1 | 0.572 |
| Tatoeba-test.deu-swg.deu.swg | 0.1 | 0.088 |
| Tatoeba-test.deu-tgk.deu.tgk | 13.1 | 0.406 |
| Tatoeba-test.deu-ukr.deu.ukr | 27.2 | 0.489 |
| Tatoeba-test.deu-urd.deu.urd | 13.4 | 0.350 |
| Tatoeba-test.deu-yid.deu.yid | 6.0 | 0.262 |
| Tatoeba-test.dsb-deu.dsb.deu | 14.1 | 0.366 |
| Tatoeba-test.dsb-eng.dsb.eng | 19.0 | 0.424 |
| Tatoeba-test.dsb-nld.dsb.nld | 15.4 | 0.342 |
| Tatoeba-test.dsb-pol.dsb.pol | 15.2 | 0.315 |
| Tatoeba-test.dsb-rus.dsb.rus | 35.4 | 0.394 |
| Tatoeba-test.dsb-spa.dsb.spa | 12.6 | 0.401 |
| Tatoeba-test.egl-deu.egl.deu | 2.9 | 0.168 |
| Tatoeba-test.egl-eng.egl.eng | 5.2 | 0.207 |
| Tatoeba-test.egl-fra.egl.fra | 6.4 | 0.215 |
| Tatoeba-test.egl-ita.egl.ita | 1.6 | 0.180 |
| Tatoeba-test.egl-spa.egl.spa | 3.9 | 0.199 |
| Tatoeba-test.ell-bul.ell.bul | 26.6 | 0.483 |
| Tatoeba-test.ell-cat.ell.cat | 20.2 | 0.398 |
| Tatoeba-test.ell-ces.ell.ces | 12.1 | 0.380 |
| Tatoeba-test.ell-cor.ell.cor | 0.7 | 0.039 |
| Tatoeba-test.ell-dan.ell.dan | 53.7 | 0.513 |
| Tatoeba-test.ell-deu.ell.deu | 30.5 | 0.503 |
| Tatoeba-test.ell-eng.ell.eng | 43.1 | 0.589 |
| Tatoeba-test.ell-enm.ell.enm | 12.7 | 0.541 |
| Tatoeba-test.ell-fas.ell.fas | 5.3 | 0.210 |
| Tatoeba-test.ell-fra.ell.fra | 39.5 | 0.563 |
| Tatoeba-test.ell-glg.ell.glg | 11.6 | 0.343 |
| Tatoeba-test.ell-ita.ell.ita | 30.9 | 0.524 |
| Tatoeba-test.ell-msa.ell.msa | 57.6 | 0.572 |
| Tatoeba-test.ell-nds.ell.nds | 4.9 | 0.244 |
| Tatoeba-test.ell-nld.ell.nld | 38.0 | 0.562 |
| Tatoeba-test.ell-nor.ell.nor | 40.8 | 0.615 |
| Tatoeba-test.ell-pap.ell.pap | 72.6 | 0.846 |
| Tatoeba-test.ell-pol.ell.pol | 26.8 | 0.514 |
| Tatoeba-test.ell-por.ell.por | 27.1 | 0.493 |
| Tatoeba-test.ell-rus.ell.rus | 30.8 | 0.512 |
| Tatoeba-test.ell-spa.ell.spa | 30.8 | 0.475 |
| Tatoeba-test.ell-swe.ell.swe | 36.0 | 0.521 |
| Tatoeba-test.ell-ukr.ell.ukr | 12.6 | 0.364 |
| Tatoeba-test.ell-yid.ell.yid | 100.0 | 1.000 |
| Tatoeba-test.eng-afr.eng.afr | 46.1 | 0.633 |
| Tatoeba-test.eng-ang.eng.ang | 5.1 | 0.136 |
| Tatoeba-test.eng-arg.eng.arg | 5.1 | 0.199 |
| Tatoeba-test.eng-asm.eng.asm | 0.8 | 0.208 |
| Tatoeba-test.eng-ast.eng.ast | 16.8 | 0.380 |
| Tatoeba-test.eng-awa.eng.awa | 0.2 | 0.002 |
| Tatoeba-test.eng-bel.eng.bel | 16.6 | 0.415 |
| Tatoeba-test.eng-ben.eng.ben | 7.0 | 0.321 |
| Tatoeba-test.eng-bho.eng.bho | 0.2 | 0.003 |
| Tatoeba-test.eng-bre.eng.bre | 6.6 | 0.251 |
| Tatoeba-test.eng-bul.eng.bul | 31.5 | 0.513 |
| Tatoeba-test.eng-cat.eng.cat | 33.5 | 0.550 |
| Tatoeba-test.eng-ces.eng.ces | 25.6 | 0.466 |
| Tatoeba-test.eng-cor.eng.cor | 0.1 | 0.035 |
| Tatoeba-test.eng-cos.eng.cos | 0.8 | 0.135 |
| Tatoeba-test.eng-csb.eng.csb | 1.4 | 0.194 |
| Tatoeba-test.eng-cym.eng.cym | 18.8 | 0.422 |
| Tatoeba-test.eng-dan.eng.dan | 41.2 | 0.591 |
| Tatoeba-test.eng-deu.eng.deu | 27.9 | 0.503 |
| Tatoeba-test.eng-dsb.eng.dsb | 0.7 | 0.125 |
| Tatoeba-test.eng-egl.eng.egl | 0.1 | 0.062 |
| Tatoeba-test.eng-ell.eng.ell | 30.7 | 0.540 |
| Tatoeba-test.eng-enm.eng.enm | 4.9 | 0.283 |
| Tatoeba-test.eng-ext.eng.ext | 3.9 | 0.217 |
| Tatoeba-test.eng-fao.eng.fao | 5.9 | 0.276 |
| Tatoeba-test.eng-fas.eng.fas | 4.8 | 0.239 |
| Tatoeba-test.eng-fra.eng.fra | 34.6 | 0.551 |
| Tatoeba-test.eng-frm.eng.frm | 0.2 | 0.099 |
| Tatoeba-test.eng-frr.eng.frr | 5.5 | 0.040 |
| Tatoeba-test.eng-fry.eng.fry | 13.1 | 0.357 |
| Tatoeba-test.eng-gcf.eng.gcf | 0.4 | 0.085 |
| Tatoeba-test.eng-gla.eng.gla | 7.4 | 0.293 |
| Tatoeba-test.eng-gle.eng.gle | 20.0 | 0.415 |
| Tatoeba-test.eng-glg.eng.glg | 29.9 | 0.528 |
| Tatoeba-test.eng-glv.eng.glv | 5.9 | 0.220 |
| Tatoeba-test.eng-gos.eng.gos | 0.5 | 0.137 |
| Tatoeba-test.eng-got.eng.got | 0.1 | 0.009 |
| Tatoeba-test.eng-grc.eng.grc | 0.0 | 0.005 |
| Tatoeba-test.eng-gsw.eng.gsw | 0.5 | 0.103 |
| Tatoeba-test.eng-guj.eng.guj | 6.4 | 0.241 |
| Tatoeba-test.eng-hat.eng.hat | 28.2 | 0.460 |
| Tatoeba-test.eng-hbs.eng.hbs | 26.0 | 0.485 |
| Tatoeba-test.eng-hif.eng.hif | 0.8 | 0.228 |
| Tatoeba-test.eng-hin.eng.hin | 11.2 | 0.364 |
| Tatoeba-test.eng-hsb.eng.hsb | 10.6 | 0.277 |
| Tatoeba-test.eng-hye.eng.hye | 10.9 | 0.307 |
| Tatoeba-test.eng-isl.eng.isl | 13.8 | 0.368 |
| Tatoeba-test.eng-ita.eng.ita | 33.8 | 0.571 |
| Tatoeba-test.eng-jdt.eng.jdt | 3.0 | 0.007 |
| Tatoeba-test.eng-kok.eng.kok | 4.8 | 0.005 |
| Tatoeba-test.eng-ksh.eng.ksh | 0.4 | 0.092 |
| Tatoeba-test.eng-kur.eng.kur | 9.0 | 0.174 |
| Tatoeba-test.eng-lad.eng.lad | 0.5 | 0.144 |
| Tatoeba-test.eng-lah.eng.lah | 0.1 | 0.000 |
| Tatoeba-test.eng-lat.eng.lat | 7.7 | 0.333 |
| Tatoeba-test.eng-lav.eng.lav | 25.1 | 0.480 |
| Tatoeba-test.eng-lij.eng.lij | 0.4 | 0.101 |
| Tatoeba-test.eng-lit.eng.lit | 21.0 | 0.492 |
| Tatoeba-test.eng-lld.eng.lld | 0.5 | 0.143 |
| Tatoeba-test.eng-lmo.eng.lmo | 0.5 | 0.135 |
| Tatoeba-test.eng-ltz.eng.ltz | 15.6 | 0.345 |
| Tatoeba-test.eng-mai.eng.mai | 9.3 | 0.251 |
| Tatoeba-test.eng-mar.eng.mar | 9.5 | 0.326 |
| Tatoeba-test.eng-mfe.eng.mfe | 54.1 | 0.747 |
| Tatoeba-test.eng-mkd.eng.mkd | 29.8 | 0.503 |
| Tatoeba-test.eng-msa.eng.msa | 20.0 | 0.449 |
| Tatoeba-test.eng-mwl.eng.mwl | 9.3 | 0.231 |
| Tatoeba-test.eng-nds.eng.nds | 12.2 | 0.357 |
| Tatoeba-test.eng-nep.eng.nep | 0.2 | 0.003 |
| Tatoeba-test.eng-nld.eng.nld | 37.1 | 0.570 |
| Tatoeba-test.eng-non.eng.non | 0.5 | 0.078 |
| Tatoeba-test.eng-nor.eng.nor | 38.4 | 0.575 |
| Tatoeba-test.eng-oci.eng.oci | 4.8 | 0.249 |
| Tatoeba-test.eng-ori.eng.ori | 2.8 | 0.185 |
| Tatoeba-test.eng-orv.eng.orv | 0.1 | 0.011 |
| Tatoeba-test.eng-oss.eng.oss | 2.6 | 0.166 |
| Tatoeba-test.eng-pan.eng.pan | 2.6 | 0.214 |
| Tatoeba-test.eng-pap.eng.pap | 39.8 | 0.566 |
| Tatoeba-test.eng-pdc.eng.pdc | 1.0 | 0.131 |
| Tatoeba-test.eng-pms.eng.pms | 0.9 | 0.124 |
| Tatoeba-test.eng-pol.eng.pol | 26.2 | 0.500 |
| Tatoeba-test.eng-por.eng.por | 31.5 | 0.545 |
| Tatoeba-test.eng-prg.eng.prg | 0.2 | 0.088 |
| Tatoeba-test.eng-pus.eng.pus | 0.4 | 0.108 |
| Tatoeba-test.eng-roh.eng.roh | 1.8 | 0.192 |
| Tatoeba-test.eng-rom.eng.rom | 7.6 | 0.313 |
| Tatoeba-test.eng-ron.eng.ron | 27.6 | 0.508 |
| Tatoeba-test.eng-rue.eng.rue | 0.1 | 0.011 |
| Tatoeba-test.eng-rus.eng.rus | 28.6 | 0.496 |
| Tatoeba-test.eng-san.eng.san | 2.0 | 0.098 |
| Tatoeba-test.eng-scn.eng.scn | 0.9 | 0.080 |
| Tatoeba-test.eng-sco.eng.sco | 24.5 | 0.501 |
| Tatoeba-test.eng-sgs.eng.sgs | 1.3 | 0.105 |
| Tatoeba-test.eng-sin.eng.sin | 3.0 | 0.178 |
| Tatoeba-test.eng-slv.eng.slv | 12.5 | 0.298 |
| Tatoeba-test.eng-snd.eng.snd | 1.7 | 0.214 |
| Tatoeba-test.eng-spa.eng.spa | 36.3 | 0.575 |
| Tatoeba-test.eng-sqi.eng.sqi | 22.1 | 0.459 |
| Tatoeba-test.eng-stq.eng.stq | 5.2 | 0.316 |
| Tatoeba-test.eng-swe.eng.swe | 42.4 | 0.591 |
| Tatoeba-test.eng-swg.eng.swg | 0.6 | 0.145 |
| Tatoeba-test.eng-tgk.eng.tgk | 1.9 | 0.255 |
| Tatoeba-test.eng-tly.eng.tly | 0.3 | 0.054 |
| Tatoeba-test.eng-ukr.eng.ukr | 27.3 | 0.478 |
| Tatoeba-test.eng-urd.eng.urd | 7.0 | 0.310 |
| Tatoeba-test.eng-vec.eng.vec | 0.9 | 0.116 |
| Tatoeba-test.eng-wln.eng.wln | 4.0 | 0.164 |
| Tatoeba-test.eng-yid.eng.yid | 5.9 | 0.260 |
| Tatoeba-test.eng-zza.eng.zza | 0.4 | 0.071 |
| Tatoeba-test.enm-afr.enm.afr | 20.1 | 0.420 |
| Tatoeba-test.enm-ang.enm.ang | 0.6 | 0.057 |
| Tatoeba-test.enm-bul.enm.bul | 22.8 | 0.278 |
| Tatoeba-test.enm-cat.enm.cat | 9.0 | 0.360 |
| Tatoeba-test.enm-ces.enm.ces | 19.0 | 0.324 |
| Tatoeba-test.enm-dan.enm.dan | 35.8 | 0.523 |
| Tatoeba-test.enm-deu.enm.deu | 35.7 | 0.495 |
| Tatoeba-test.enm-ell.enm.ell | 42.7 | 0.644 |
| Tatoeba-test.enm-eng.enm.eng | 22.4 | 0.477 |
| Tatoeba-test.enm-fas.enm.fas | 4.3 | 0.141 |
| Tatoeba-test.enm-fra.enm.fra | 9.0 | 0.345 |
| Tatoeba-test.enm-fry.enm.fry | 16.0 | 0.289 |
| Tatoeba-test.enm-gle.enm.gle | 4.1 | 0.143 |
| Tatoeba-test.enm-gos.enm.gos | 3.0 | 0.247 |
| Tatoeba-test.enm-hbs.enm.hbs | 11.6 | 0.294 |
| Tatoeba-test.enm-isl.enm.isl | 19.0 | 0.220 |
| Tatoeba-test.enm-ita.enm.ita | 4.8 | 0.188 |
| Tatoeba-test.enm-ksh.enm.ksh | 6.1 | 0.136 |
| Tatoeba-test.enm-kur.enm.kur | 16.0 | 0.054 |
| Tatoeba-test.enm-lad.enm.lad | 0.7 | 0.124 |
| Tatoeba-test.enm-lat.enm.lat | 5.4 | 0.238 |
| Tatoeba-test.enm-mwl.enm.mwl | 10.5 | 0.155 |
| Tatoeba-test.enm-nds.enm.nds | 18.6 | 0.427 |
| Tatoeba-test.enm-nld.enm.nld | 38.9 | 0.611 |
| Tatoeba-test.enm-nor.enm.nor | 6.8 | 0.276 |
| Tatoeba-test.enm-oci.enm.oci | 10.5 | 0.138 |
| Tatoeba-test.enm-por.enm.por | 12.7 | 0.088 |
| Tatoeba-test.enm-ron.enm.ron | 7.6 | 0.109 |
| Tatoeba-test.enm-rus.enm.rus | 18.8 | 0.254 |
| Tatoeba-test.enm-spa.enm.spa | 21.4 | 0.339 |
| Tatoeba-test.enm-ukr.enm.ukr | 4.0 | 0.440 |
| Tatoeba-test.enm-yid.enm.yid | 5.3 | 0.231 |
| Tatoeba-test.ext-eng.ext.eng | 24.9 | 0.420 |
| Tatoeba-test.fao-ang.fao.ang | 0.0 | 0.056 |
| Tatoeba-test.fao-cat.fao.cat | 16.0 | 0.171 |
| Tatoeba-test.fao-ces.fao.ces | 2.1 | 0.258 |
| Tatoeba-test.fao-dan.fao.dan | 43.5 | 0.557 |
| Tatoeba-test.fao-eng.fao.eng | 21.3 | 0.402 |
| Tatoeba-test.fao-fra.fao.fra | 3.0 | 0.164 |
| Tatoeba-test.fao-gos.fao.gos | 12.7 | 0.142 |
| Tatoeba-test.fao-isl.fao.isl | 10.5 | 0.131 |
| Tatoeba-test.fao-msa.fao.msa | 0.6 | 0.087 |
| Tatoeba-test.fao-nor.fao.nor | 26.2 | 0.443 |
| Tatoeba-test.fao-pol.fao.pol | 3.6 | 0.176 |
| Tatoeba-test.fao-swe.fao.swe | 0.0 | 0.632 |
| Tatoeba-test.fas-bul.fas.bul | 5.8 | 0.163 |
| Tatoeba-test.fas-ces.fas.ces | 14.5 | 0.104 |
| Tatoeba-test.fas-dan.fas.dan | 53.7 | 0.504 |
| Tatoeba-test.fas-deu.fas.deu | 8.5 | 0.311 |
| Tatoeba-test.fas-ell.fas.ell | 8.7 | 0.259 |
| Tatoeba-test.fas-eng.fas.eng | 10.3 | 0.303 |
| Tatoeba-test.fas-enm.fas.enm | 1.3 | 0.006 |
| Tatoeba-test.fas-fra.fas.fra | 8.6 | 0.331 |
| Tatoeba-test.fas-ita.fas.ita | 7.2 | 0.301 |
| Tatoeba-test.fas-lad.fas.lad | 0.4 | 0.074 |
| Tatoeba-test.fas-lat.fas.lat | 14.4 | 0.256 |
| Tatoeba-test.fas-msa.fas.msa | 9.8 | 0.325 |
| Tatoeba-test.fas-nds.fas.nds | 6.6 | 0.127 |
| Tatoeba-test.fas-nld.fas.nld | 50.0 | 0.657 |
| Tatoeba-test.fas-pol.fas.pol | 4.5 | 0.223 |
| Tatoeba-test.fas-por.fas.por | 8.6 | 0.316 |
| Tatoeba-test.fas-ron.fas.ron | 19.1 | 0.445 |
| Tatoeba-test.fas-rus.fas.rus | 9.8 | 0.313 |
| Tatoeba-test.fas-spa.fas.spa | 9.1 | 0.318 |
| Tatoeba-test.fas-ukr.fas.ukr | 4.8 | 0.213 |
| Tatoeba-test.fas-yid.fas.yid | 2.0 | 0.138 |
| Tatoeba-test.fra-afr.fra.afr | 49.7 | 0.630 |
| Tatoeba-test.fra-ang.fra.ang | 1.0 | 0.105 |
| Tatoeba-test.fra-arg.fra.arg | 0.0 | 0.011 |
| Tatoeba-test.fra-asm.fra.asm | 4.1 | 0.194 |
| Tatoeba-test.fra-ast.fra.ast | 23.0 | 0.410 |
| Tatoeba-test.fra-bel.fra.bel | 22.2 | 0.448 |
| Tatoeba-test.fra-ben.fra.ben | 6.4 | 0.341 |
| Tatoeba-test.fra-bho.fra.bho | 1.2 | 0.035 |
| Tatoeba-test.fra-bre.fra.bre | 3.4 | 0.204 |
| Tatoeba-test.fra-bul.fra.bul | 31.2 | 0.528 |
| Tatoeba-test.fra-cat.fra.cat | 33.9 | 0.570 |
| Tatoeba-test.fra-ces.fra.ces | 26.9 | 0.490 |
| Tatoeba-test.fra-cor.fra.cor | 0.2 | 0.039 |
| Tatoeba-test.fra-cos.fra.cos | 0.3 | 0.061 |
| Tatoeba-test.fra-cym.fra.cym | 17.3 | 0.455 |
| Tatoeba-test.fra-dan.fra.dan | 47.1 | 0.634 |
| Tatoeba-test.fra-deu.fra.deu | 31.1 | 0.530 |
| Tatoeba-test.fra-egl.fra.egl | 0.7 | 0.061 |
| Tatoeba-test.fra-ell.fra.ell | 32.4 | 0.544 |
| Tatoeba-test.fra-eng.fra.eng | 40.1 | 0.583 |
| Tatoeba-test.fra-enm.fra.enm | 5.1 | 0.207 |
| Tatoeba-test.fra-fao.fra.fao | 1.8 | 0.304 |
| Tatoeba-test.fra-fas.fra.fas | 5.6 | 0.233 |
| Tatoeba-test.fra-frm.fra.frm | 0.3 | 0.149 |
| Tatoeba-test.fra-frr.fra.frr | 6.4 | 0.412 |
| Tatoeba-test.fra-fry.fra.fry | 11.4 | 0.357 |
| Tatoeba-test.fra-gcf.fra.gcf | 0.1 | 0.067 |
| Tatoeba-test.fra-gla.fra.gla | 9.1 | 0.316 |
| Tatoeba-test.fra-gle.fra.gle | 16.8 | 0.416 |
| Tatoeba-test.fra-glg.fra.glg | 34.5 | 0.562 |
| Tatoeba-test.fra-gos.fra.gos | 5.5 | 0.204 |
| Tatoeba-test.fra-got.fra.got | 0.2 | 0.001 |
| Tatoeba-test.fra-grc.fra.grc | 0.1 | 0.006 |
| Tatoeba-test.fra-hat.fra.hat | 20.8 | 0.424 |
| Tatoeba-test.fra-hbs.fra.hbs | 28.9 | 0.511 |
| Tatoeba-test.fra-hin.fra.hin | 5.1 | 0.336 |
| Tatoeba-test.fra-hye.fra.hye | 11.5 | 0.401 |
| Tatoeba-test.fra-isl.fra.isl | 17.2 | 0.362 |
| Tatoeba-test.fra-ita.fra.ita | 37.7 | 0.606 |
| Tatoeba-test.fra-ksh.fra.ksh | 2.8 | 0.148 |
| Tatoeba-test.fra-kur.fra.kur | 14.3 | 0.188 |
| Tatoeba-test.fra-lad.fra.lad | 0.4 | 0.129 |
| Tatoeba-test.fra-lat.fra.lat | 2.8 | 0.258 |
| Tatoeba-test.fra-lav.fra.lav | 30.3 | 0.490 |
| Tatoeba-test.fra-lij.fra.lij | 0.3 | 0.099 |
| Tatoeba-test.fra-lit.fra.lit | 18.3 | 0.461 |
| Tatoeba-test.fra-lld.fra.lld | 0.6 | 0.185 |
| Tatoeba-test.fra-lmo.fra.lmo | 1.2 | 0.163 |
| Tatoeba-test.fra-ltz.fra.ltz | 15.3 | 0.385 |
| Tatoeba-test.fra-mar.fra.mar | 45.7 | 0.393 |
| Tatoeba-test.fra-mkd.fra.mkd | 29.5 | 0.498 |
| Tatoeba-test.fra-msa.fra.msa | 19.4 | 0.456 |
| Tatoeba-test.fra-nds.fra.nds | 12.9 | 0.356 |
| Tatoeba-test.fra-nld.fra.nld | 33.0 | 0.532 |
| Tatoeba-test.fra-non.fra.non | 1.2 | 0.072 |
| Tatoeba-test.fra-nor.fra.nor | 35.1 | 0.553 |
| Tatoeba-test.fra-oci.fra.oci | 6.8 | 0.313 |
| Tatoeba-test.fra-orv.fra.orv | 0.2 | 0.004 |
| Tatoeba-test.fra-oss.fra.oss | 3.6 | 0.112 |
| Tatoeba-test.fra-pap.fra.pap | 78.3 | 0.917 |
| Tatoeba-test.fra-pcd.fra.pcd | 0.1 | 0.084 |
| Tatoeba-test.fra-pms.fra.pms | 0.3 | 0.117 |
| Tatoeba-test.fra-pol.fra.pol | 22.4 | 0.468 |
| Tatoeba-test.fra-por.fra.por | 33.0 | 0.559 |
| Tatoeba-test.fra-prg.fra.prg | 0.6 | 0.084 |
| Tatoeba-test.fra-roh.fra.roh | 5.9 | 0.278 |
| Tatoeba-test.fra-rom.fra.rom | 4.2 | 0.257 |
| Tatoeba-test.fra-ron.fra.ron | 29.7 | 0.531 |
| Tatoeba-test.fra-rus.fra.rus | 28.8 | 0.498 |
| Tatoeba-test.fra-scn.fra.scn | 0.4 | 0.056 |
| Tatoeba-test.fra-sco.fra.sco | 1.7 | 0.222 |
| Tatoeba-test.fra-slv.fra.slv | 2.4 | 0.207 |
| Tatoeba-test.fra-spa.fra.spa | 38.6 | 0.598 |
| Tatoeba-test.fra-sqi.fra.sqi | 23.9 | 0.455 |
| Tatoeba-test.fra-srd.fra.srd | 1.2 | 0.159 |
| Tatoeba-test.fra-swe.fra.swe | 44.2 | 0.609 |
| Tatoeba-test.fra-swg.fra.swg | 2.4 | 0.123 |
| Tatoeba-test.fra-tgk.fra.tgk | 2.8 | 0.244 |
| Tatoeba-test.fra-tly.fra.tly | 0.5 | 0.034 |
| Tatoeba-test.fra-ukr.fra.ukr | 26.7 | 0.474 |
| Tatoeba-test.fra-urd.fra.urd | 2.3 | 0.333 |
| Tatoeba-test.fra-vec.fra.vec | 0.6 | 0.088 |
| Tatoeba-test.fra-wln.fra.wln | 5.3 | 0.178 |
| Tatoeba-test.fra-yid.fra.yid | 8.7 | 0.271 |
| Tatoeba-test.frm-eng.frm.eng | 19.2 | 0.394 |
| Tatoeba-test.frm-fra.frm.fra | 12.3 | 0.482 |
| Tatoeba-test.frr-deu.frr.deu | 8.3 | 0.286 |
| Tatoeba-test.frr-eng.frr.eng | 6.1 | 0.181 |
| Tatoeba-test.frr-fra.frr.fra | 12.7 | 0.535 |
| Tatoeba-test.frr-fry.frr.fry | 4.1 | 0.144 |
| Tatoeba-test.frr-gos.frr.gos | 0.5 | 0.033 |
| Tatoeba-test.frr-nds.frr.nds | 12.4 | 0.127 |
| Tatoeba-test.frr-nld.frr.nld | 6.9 | 0.233 |
| Tatoeba-test.frr-stq.frr.stq | 0.5 | 0.045 |
| Tatoeba-test.fry-afr.fry.afr | 0.0 | 0.244 |
| Tatoeba-test.fry-ces.fry.ces | 4.2 | 0.280 |
| Tatoeba-test.fry-deu.fry.deu | 21.7 | 0.448 |
| Tatoeba-test.fry-eng.fry.eng | 22.9 | 0.431 |
| Tatoeba-test.fry-enm.fry.enm | 10.7 | 0.140 |
| Tatoeba-test.fry-fra.fry.fra | 31.8 | 0.455 |
| Tatoeba-test.fry-frr.fry.frr | 0.5 | 0.040 |
| Tatoeba-test.fry-gos.fry.gos | 0.7 | 0.204 |
| Tatoeba-test.fry-ita.fry.ita | 34.8 | 0.528 |
| Tatoeba-test.fry-lat.fry.lat | 8.1 | 0.318 |
| Tatoeba-test.fry-ltz.fry.ltz | 21.4 | 0.324 |
| Tatoeba-test.fry-msa.fry.msa | 0.1 | 0.000 |
| Tatoeba-test.fry-nds.fry.nds | 6.6 | 0.127 |
| Tatoeba-test.fry-nld.fry.nld | 35.7 | 0.576 |
| Tatoeba-test.fry-nor.fry.nor | 32.6 | 0.511 |
| Tatoeba-test.fry-pol.fry.pol | 17.7 | 0.342 |
| Tatoeba-test.fry-por.fry.por | 12.1 | 0.304 |
| Tatoeba-test.fry-rus.fry.rus | 31.7 | 0.438 |
| Tatoeba-test.fry-spa.fry.spa | 30.6 | 0.479 |
| Tatoeba-test.fry-stq.fry.stq | 0.5 | 0.156 |
| Tatoeba-test.fry-swe.fry.swe | 27.5 | 0.247 |
| Tatoeba-test.fry-ukr.fry.ukr | 16.1 | 0.330 |
| Tatoeba-test.fry-yid.fry.yid | 4.0 | 0.167 |
| Tatoeba-test.gcf-eng.gcf.eng | 13.2 | 0.257 |
| Tatoeba-test.gcf-fra.gcf.fra | 6.0 | 0.241 |
| Tatoeba-test.gcf-lad.gcf.lad | 0.0 | 0.170 |
| Tatoeba-test.gcf-por.gcf.por | 0.0 | 0.427 |
| Tatoeba-test.gcf-rus.gcf.rus | 0.0 | 1.000 |
| Tatoeba-test.gcf-spa.gcf.spa | 31.8 | 0.374 |
| Tatoeba-test.gla-cym.gla.cym | 11.5 | 0.416 |
| Tatoeba-test.gla-deu.gla.deu | 15.1 | 0.348 |
| Tatoeba-test.gla-eng.gla.eng | 17.5 | 0.329 |
| Tatoeba-test.gla-fra.gla.fra | 13.1 | 0.346 |
| Tatoeba-test.gla-ita.gla.ita | 12.1 | 0.306 |
| Tatoeba-test.gla-ksh.gla.ksh | 8.0 | 0.035 |
| Tatoeba-test.gla-pol.gla.pol | 20.8 | 0.299 |
| Tatoeba-test.gla-por.gla.por | 13.7 | 0.355 |
| Tatoeba-test.gla-rus.gla.rus | 24.7 | 0.423 |
| Tatoeba-test.gla-spa.gla.spa | 12.7 | 0.322 |
| Tatoeba-test.gle-cym.gle.cym | 7.8 | 0.288 |
| Tatoeba-test.gle-deu.gle.deu | 13.5 | 0.390 |
| Tatoeba-test.gle-eng.gle.eng | 32.0 | 0.490 |
| Tatoeba-test.gle-enm.gle.enm | 5.0 | 0.135 |
| Tatoeba-test.gle-fra.gle.fra | 18.0 | 0.403 |
| Tatoeba-test.gle-glv.gle.glv | 16.9 | 0.377 |
| Tatoeba-test.gle-kur.gle.kur | 0.0 | 0.077 |
| Tatoeba-test.gle-lad.gle.lad | 2.4 | 0.328 |
| Tatoeba-test.gle-ron.gle.ron | 0.0 | 0.673 |
| Tatoeba-test.gle-rus.gle.rus | 2.5 | 0.139 |
| Tatoeba-test.gle-spa.gle.spa | 24.5 | 0.458 |
| Tatoeba-test.gle-yid.gle.yid | 13.3 | 0.324 |
| Tatoeba-test.glg-deu.glg.deu | 30.4 | 0.539 |
| Tatoeba-test.glg-ell.glg.ell | 30.2 | 0.448 |
| Tatoeba-test.glg-eng.glg.eng | 37.9 | 0.571 |
| Tatoeba-test.glg-fra.glg.fra | 45.8 | 0.627 |
| Tatoeba-test.glg-ita.glg.ita | 31.1 | 0.561 |
| Tatoeba-test.glg-nld.glg.nld | 36.2 | 0.573 |
| Tatoeba-test.glg-pol.glg.pol | 22.7 | 0.524 |
| Tatoeba-test.glg-por.glg.por | 47.4 | 0.674 |
| Tatoeba-test.glg-rus.glg.rus | 28.4 | 0.465 |
| Tatoeba-test.glg-spa.glg.spa | 53.2 | 0.704 |
| Tatoeba-test.glv-cym.glv.cym | 1.4 | 0.140 |
| Tatoeba-test.glv-eng.glv.eng | 3.2 | 0.104 |
| Tatoeba-test.glv-gle.glv.gle | 9.9 | 0.243 |
| Tatoeba-test.gos-afr.gos.afr | 6.2 | 0.269 |
| Tatoeba-test.gos-ang.gos.ang | 0.0 | 0.056 |
| Tatoeba-test.gos-ast.gos.ast | 6.6 | 0.107 |
| Tatoeba-test.gos-dan.gos.dan | 12.0 | 0.356 |
| Tatoeba-test.gos-deu.gos.deu | 15.7 | 0.384 |
| Tatoeba-test.gos-eng.gos.eng | 14.8 | 0.320 |
| Tatoeba-test.gos-enm.gos.enm | 4.1 | 0.292 |
| Tatoeba-test.gos-fao.gos.fao | 19.0 | 0.111 |
| Tatoeba-test.gos-fra.gos.fra | 8.4 | 0.321 |
| Tatoeba-test.gos-frr.gos.frr | 0.9 | 0.064 |
| Tatoeba-test.gos-fry.gos.fry | 13.5 | 0.361 |
| Tatoeba-test.gos-isl.gos.isl | 8.2 | 0.228 |
| Tatoeba-test.gos-ita.gos.ita | 31.9 | 0.610 |
| Tatoeba-test.gos-kur.gos.kur | 0.0 | 0.050 |
| Tatoeba-test.gos-lad.gos.lad | 0.5 | 0.010 |
| Tatoeba-test.gos-lat.gos.lat | 4.5 | 0.206 |
| Tatoeba-test.gos-ltz.gos.ltz | 4.2 | 0.220 |
| Tatoeba-test.gos-nds.gos.nds | 3.9 | 0.202 |
| Tatoeba-test.gos-nld.gos.nld | 16.8 | 0.389 |
| Tatoeba-test.gos-rus.gos.rus | 5.2 | 0.298 |
| Tatoeba-test.gos-spa.gos.spa | 24.7 | 0.406 |
| Tatoeba-test.gos-stq.gos.stq | 0.4 | 0.137 |
| Tatoeba-test.gos-swe.gos.swe | 16.8 | 0.310 |
| Tatoeba-test.gos-ukr.gos.ukr | 5.4 | 0.370 |
| Tatoeba-test.gos-yid.gos.yid | 4.3 | 0.170 |
| Tatoeba-test.got-deu.got.deu | 0.6 | 0.044 |
| Tatoeba-test.got-eng.got.eng | 0.1 | 0.050 |
| Tatoeba-test.got-fra.got.fra | 0.2 | 0.064 |
| Tatoeba-test.got-nor.got.nor | 3.1 | 0.013 |
| Tatoeba-test.got-spa.got.spa | 0.2 | 0.050 |
| Tatoeba-test.grc-ces.grc.ces | 2.7 | 0.155 |
| Tatoeba-test.grc-deu.grc.deu | 4.7 | 0.198 |
| Tatoeba-test.grc-eng.grc.eng | 1.9 | 0.146 |
| Tatoeba-test.grc-fra.grc.fra | 12.8 | 0.234 |
| Tatoeba-test.grc-lat.grc.lat | 0.5 | 0.114 |
| Tatoeba-test.grc-por.grc.por | 0.8 | 0.163 |
| Tatoeba-test.grc-spa.grc.spa | 2.4 | 0.141 |
| Tatoeba-test.gsw-deu.gsw.deu | 12.6 | 0.393 |
| Tatoeba-test.gsw-eng.gsw.eng | 15.9 | 0.322 |
| Tatoeba-test.gsw-spa.gsw.spa | 19.0 | 0.308 |
| Tatoeba-test.guj-eng.guj.eng | 15.9 | 0.301 |
| Tatoeba-test.guj-spa.guj.spa | 14.7 | 0.250 |
| Tatoeba-test.hat-eng.hat.eng | 38.5 | 0.522 |
| Tatoeba-test.hat-fra.hat.fra | 17.6 | 0.424 |
| Tatoeba-test.hat-nld.hat.nld | 32.0 | 0.472 |
| Tatoeba-test.hat-spa.hat.spa | 31.2 | 0.496 |
| Tatoeba-test.hbs-bel.hbs.bel | 40.1 | 0.579 |
| Tatoeba-test.hbs-bul.hbs.bul | 100.0 | 1.000 |
| Tatoeba-test.hbs-ces.hbs.ces | 27.8 | 0.543 |
| Tatoeba-test.hbs-deu.hbs.deu | 32.9 | 0.545 |
| Tatoeba-test.hbs-eng.hbs.eng | 38.6 | 0.563 |
| Tatoeba-test.hbs-enm.hbs.enm | 2.3 | 0.299 |
| Tatoeba-test.hbs-fra.hbs.fra | 33.3 | 0.548 |
| Tatoeba-test.hbs-ita.hbs.ita | 37.9 | 0.602 |
| Tatoeba-test.hbs-lat.hbs.lat | 9.8 | 0.289 |
| Tatoeba-test.hbs-mkd.hbs.mkd | 38.0 | 0.718 |
| Tatoeba-test.hbs-nor.hbs.nor | 31.8 | 0.528 |
| Tatoeba-test.hbs-pol.hbs.pol | 31.7 | 0.548 |
| Tatoeba-test.hbs-por.hbs.por | 28.1 | 0.484 |
| Tatoeba-test.hbs-rus.hbs.rus | 38.9 | 0.596 |
| Tatoeba-test.hbs-spa.hbs.spa | 38.6 | 0.589 |
| Tatoeba-test.hbs-swe.hbs.swe | 100.0 | 1.000 |
| Tatoeba-test.hbs-ukr.hbs.ukr | 36.0 | 0.557 |
| Tatoeba-test.hbs-urd.hbs.urd | 8.1 | 0.441 |
| Tatoeba-test.hif-eng.hif.eng | 8.9 | 0.439 |
| Tatoeba-test.hin-asm.hin.asm | 8.8 | 0.288 |
| Tatoeba-test.hin-deu.hin.deu | 26.1 | 0.414 |
| Tatoeba-test.hin-eng.hin.eng | 25.5 | 0.440 |
| Tatoeba-test.hin-fra.hin.fra | 30.1 | 0.449 |
| Tatoeba-test.hin-mar.hin.mar | 12.6 | 0.412 |
| Tatoeba-test.hin-nor.hin.nor | 9.9 | 0.416 |
| Tatoeba-test.hin-pol.hin.pol | 8.4 | 0.289 |
| Tatoeba-test.hin-rus.hin.rus | 21.2 | 0.395 |
| Tatoeba-test.hin-spa.hin.spa | 25.9 | 0.384 |
| Tatoeba-test.hin-swe.hin.swe | 100.0 | 1.000 |
| Tatoeba-test.hin-urd.hin.urd | 10.4 | 0.376 |
| Tatoeba-test.hsb-ces.hsb.ces | 18.1 | 0.373 |
| Tatoeba-test.hsb-deu.hsb.deu | 24.4 | 0.467 |
| Tatoeba-test.hsb-eng.hsb.eng | 42.9 | 0.583 |
| Tatoeba-test.hsb-spa.hsb.spa | 19.5 | 0.444 |
| Tatoeba-test.hye-deu.hye.deu | 11.6 | 0.323 |
| Tatoeba-test.hye-eng.hye.eng | 22.1 | 0.398 |
| Tatoeba-test.hye-fra.hye.fra | 32.1 | 0.386 |
| Tatoeba-test.hye-rus.hye.rus | 21.9 | 0.407 |
| Tatoeba-test.hye-spa.hye.spa | 29.3 | 0.476 |
| Tatoeba-test.isl-afr.isl.afr | 40.5 | 0.708 |
| Tatoeba-test.isl-ang.isl.ang | 0.0 | 0.034 |
| Tatoeba-test.isl-dan.isl.dan | 38.1 | 0.582 |
| Tatoeba-test.isl-deu.isl.deu | 31.8 | 0.511 |
| Tatoeba-test.isl-eng.isl.eng | 29.8 | 0.483 |
| Tatoeba-test.isl-enm.isl.enm | 39.8 | 0.336 |
| Tatoeba-test.isl-fao.isl.fao | 26.3 | 0.441 |
| Tatoeba-test.isl-fra.isl.fra | 27.3 | 0.469 |
| Tatoeba-test.isl-gos.isl.gos | 1.9 | 0.047 |
| Tatoeba-test.isl-ita.isl.ita | 28.9 | 0.501 |
| Tatoeba-test.isl-lat.isl.lat | 2.6 | 0.135 |
| Tatoeba-test.isl-lav.isl.lav | 59.6 | 0.740 |
| Tatoeba-test.isl-msa.isl.msa | 0.1 | 0.012 |
| Tatoeba-test.isl-nor.isl.nor | 40.2 | 0.566 |
| Tatoeba-test.isl-pol.isl.pol | 19.7 | 0.358 |
| Tatoeba-test.isl-por.isl.por | 17.4 | 0.465 |
| Tatoeba-test.isl-rus.isl.rus | 18.0 | 0.386 |
| Tatoeba-test.isl-spa.isl.spa | 30.7 | 0.496 |
| Tatoeba-test.isl-stq.isl.stq | 10.7 | 0.133 |
| Tatoeba-test.isl-swe.isl.swe | 38.1 | 0.539 |
| Tatoeba-test.ita-afr.ita.afr | 53.2 | 0.676 |
| Tatoeba-test.ita-ang.ita.ang | 3.8 | 0.125 |
| Tatoeba-test.ita-asm.ita.asm | 3.4 | 0.252 |
| Tatoeba-test.ita-bel.ita.bel | 24.2 | 0.460 |
| Tatoeba-test.ita-ben.ita.ben | 12.1 | 0.427 |
| Tatoeba-test.ita-bre.ita.bre | 4.7 | 0.287 |
| Tatoeba-test.ita-bul.ita.bul | 27.8 | 0.482 |
| Tatoeba-test.ita-cat.ita.cat | 40.6 | 0.608 |
| Tatoeba-test.ita-ces.ita.ces | 23.1 | 0.450 |
| Tatoeba-test.ita-cor.ita.cor | 0.8 | 0.060 |
| Tatoeba-test.ita-cym.ita.cym | 10.1 | 0.375 |
| Tatoeba-test.ita-dan.ita.dan | 38.9 | 0.577 |
| Tatoeba-test.ita-deu.ita.deu | 31.7 | 0.539 |
| Tatoeba-test.ita-egl.ita.egl | 0.2 | 0.061 |
| Tatoeba-test.ita-ell.ita.ell | 31.5 | 0.539 |
| Tatoeba-test.ita-eng.ita.eng | 47.4 | 0.633 |
| Tatoeba-test.ita-enm.ita.enm | 6.4 | 0.247 |
| Tatoeba-test.ita-fas.ita.fas | 4.2 | 0.236 |
| Tatoeba-test.ita-fra.ita.fra | 46.6 | 0.642 |
| Tatoeba-test.ita-fry.ita.fry | 20.0 | 0.409 |
| Tatoeba-test.ita-gla.ita.gla | 7.8 | 0.312 |
| Tatoeba-test.ita-glg.ita.glg | 36.3 | 0.577 |
| Tatoeba-test.ita-gos.ita.gos | 1.1 | 0.030 |
| Tatoeba-test.ita-hbs.ita.hbs | 39.4 | 0.595 |
| Tatoeba-test.ita-isl.ita.isl | 18.5 | 0.408 |
| Tatoeba-test.ita-kur.ita.kur | 1.9 | 0.160 |
| Tatoeba-test.ita-lad.ita.lad | 1.0 | 0.178 |
| Tatoeba-test.ita-lat.ita.lat | 7.1 | 0.320 |
| Tatoeba-test.ita-lav.ita.lav | 29.0 | 0.511 |
| Tatoeba-test.ita-lij.ita.lij | 0.2 | 0.107 |
| Tatoeba-test.ita-lit.ita.lit | 20.7 | 0.475 |
| Tatoeba-test.ita-ltz.ita.ltz | 20.6 | 0.373 |
| Tatoeba-test.ita-msa.ita.msa | 14.3 | 0.409 |
| Tatoeba-test.ita-nds.ita.nds | 13.3 | 0.378 |
| Tatoeba-test.ita-nld.ita.nld | 37.8 | 0.578 |
| Tatoeba-test.ita-nor.ita.nor | 35.7 | 0.578 |
| Tatoeba-test.ita-oci.ita.oci | 11.0 | 0.369 |
| Tatoeba-test.ita-orv.ita.orv | 1.2 | 0.010 |
| Tatoeba-test.ita-pms.ita.pms | 0.2 | 0.110 |
| Tatoeba-test.ita-pol.ita.pol | 25.9 | 0.507 |
| Tatoeba-test.ita-por.ita.por | 36.8 | 0.597 |
| Tatoeba-test.ita-ron.ita.ron | 34.3 | 0.574 |
| Tatoeba-test.ita-rus.ita.rus | 28.5 | 0.494 |
| Tatoeba-test.ita-slv.ita.slv | 11.7 | 0.364 |
| Tatoeba-test.ita-spa.ita.spa | 46.3 | 0.653 |
| Tatoeba-test.ita-sqi.ita.sqi | 21.9 | 0.418 |
| Tatoeba-test.ita-swe.ita.swe | 37.7 | 0.562 |
| Tatoeba-test.ita-ukr.ita.ukr | 33.1 | 0.538 |
| Tatoeba-test.ita-vec.ita.vec | 0.8 | 0.095 |
| Tatoeba-test.ita-yid.ita.yid | 10.3 | 0.280 |
| Tatoeba-test.jdt-eng.jdt.eng | 3.9 | 0.098 |
| Tatoeba-test.kok-eng.kok.eng | 5.0 | 0.217 |
| Tatoeba-test.ksh-deu.ksh.deu | 12.2 | 0.357 |
| Tatoeba-test.ksh-eng.ksh.eng | 4.1 | 0.237 |
| Tatoeba-test.ksh-enm.ksh.enm | 5.3 | 0.299 |
| Tatoeba-test.ksh-fra.ksh.fra | 15.3 | 0.322 |
| Tatoeba-test.ksh-gla.ksh.gla | 0.0 | 0.095 |
| Tatoeba-test.ksh-spa.ksh.spa | 11.3 | 0.272 |
| Tatoeba-test.kur-ang.kur.ang | 0.0 | 0.069 |
| Tatoeba-test.kur-bel.kur.bel | 35.4 | 0.540 |
| Tatoeba-test.kur-dan.kur.dan | 24.3 | 0.509 |
| Tatoeba-test.kur-deu.kur.deu | 12.0 | 0.226 |
| Tatoeba-test.kur-eng.kur.eng | 10.0 | 0.205 |
| Tatoeba-test.kur-enm.kur.enm | 5.5 | 0.048 |
| Tatoeba-test.kur-fra.kur.fra | 16.5 | 0.236 |
| Tatoeba-test.kur-gle.kur.gle | 7.6 | 0.081 |
| Tatoeba-test.kur-gos.kur.gos | 1.6 | 0.013 |
| Tatoeba-test.kur-ita.kur.ita | 11.4 | 0.362 |
| Tatoeba-test.kur-lad.kur.lad | 0.2 | 0.067 |
| Tatoeba-test.kur-lat.kur.lat | 6.1 | 0.240 |
| Tatoeba-test.kur-lld.kur.lld | 1.9 | 0.161 |
| Tatoeba-test.kur-nld.kur.nld | 3.3 | 0.155 |
| Tatoeba-test.kur-nor.kur.nor | 31.9 | 0.184 |
| Tatoeba-test.kur-pol.kur.pol | 5.0 | 0.230 |
| Tatoeba-test.kur-por.kur.por | 37.0 | 0.295 |
| Tatoeba-test.kur-rus.kur.rus | 1.3 | 0.184 |
| Tatoeba-test.kur-spa.kur.spa | 39.1 | 0.426 |
| Tatoeba-test.kur-swe.kur.swe | 4.3 | 0.206 |
| Tatoeba-test.kur-yid.kur.yid | 2.1 | 0.164 |
| Tatoeba-test.lad-ang.lad.ang | 1.4 | 0.046 |
| Tatoeba-test.lad-bel.lad.bel | 9.7 | 0.330 |
| Tatoeba-test.lad-bul.lad.bul | 35.4 | 0.529 |
| Tatoeba-test.lad-ces.lad.ces | 33.1 | 0.604 |
| Tatoeba-test.lad-dan.lad.dan | 15.4 | 0.325 |
| Tatoeba-test.lad-deu.lad.deu | 19.3 | 0.405 |
| Tatoeba-test.lad-eng.lad.eng | 23.1 | 0.421 |
| Tatoeba-test.lad-enm.lad.enm | 2.2 | 0.173 |
| Tatoeba-test.lad-fas.lad.fas | 5.2 | 0.194 |
| Tatoeba-test.lad-fra.lad.fra | 26.3 | 0.405 |
| Tatoeba-test.lad-gcf.lad.gcf | 0.0 | 0.170 |
| Tatoeba-test.lad-gle.lad.gle | 21.4 | 0.347 |
| Tatoeba-test.lad-gos.lad.gos | 1.2 | 0.058 |
| Tatoeba-test.lad-ita.lad.ita | 22.7 | 0.479 |
| Tatoeba-test.lad-kur.lad.kur | 2.4 | 0.190 |
| Tatoeba-test.lad-lat.lad.lat | 3.4 | 0.239 |
| Tatoeba-test.lad-ltz.lad.ltz | 45.5 | 0.580 |
| Tatoeba-test.lad-nds.lad.nds | 23.0 | 0.690 |
| Tatoeba-test.lad-nld.lad.nld | 33.5 | 0.449 |
| Tatoeba-test.lad-nor.lad.nor | 66.9 | 0.951 |
| Tatoeba-test.lad-pol.lad.pol | 0.0 | 0.076 |
| Tatoeba-test.lad-por.lad.por | 27.5 | 0.448 |
| Tatoeba-test.lad-ron.lad.ron | 78.3 | 0.693 |
| Tatoeba-test.lad-rus.lad.rus | 6.5 | 0.308 |
| Tatoeba-test.lad-sco.lad.sco | 0.0 | 0.179 |
| Tatoeba-test.lad-slv.lad.slv | 59.5 | 0.602 |
| Tatoeba-test.lad-spa.lad.spa | 37.0 | 0.553 |
| Tatoeba-test.lad-swe.lad.swe | 66.9 | 0.783 |
| Tatoeba-test.lad-ukr.lad.ukr | 8.1 | 0.282 |
| Tatoeba-test.lad-yid.lad.yid | 4.8 | 0.212 |
| Tatoeba-test.lah-eng.lah.eng | 5.0 | 0.237 |
| Tatoeba-test.lat-afr.lat.afr | 100.0 | 1.000 |
| Tatoeba-test.lat-ang.lat.ang | 0.9 | 0.068 |
| Tatoeba-test.lat-bel.lat.bel | 10.6 | 0.284 |
| Tatoeba-test.lat-bul.lat.bul | 27.5 | 0.481 |
| Tatoeba-test.lat-ces.lat.ces | 15.6 | 0.331 |
| Tatoeba-test.lat-cym.lat.cym | 2.9 | 0.203 |
| Tatoeba-test.lat-dan.lat.dan | 29.4 | 0.479 |
| Tatoeba-test.lat-deu.lat.deu | 19.9 | 0.391 |
| Tatoeba-test.lat-eng.lat.eng | 20.5 | 0.396 |
| Tatoeba-test.lat-enm.lat.enm | 1.0 | 0.082 |
| Tatoeba-test.lat-fas.lat.fas | 7.9 | 0.407 |
| Tatoeba-test.lat-fra.lat.fra | 9.3 | 0.286 |
| Tatoeba-test.lat-fry.lat.fry | 7.1 | 0.192 |
| Tatoeba-test.lat-gos.lat.gos | 3.6 | 0.150 |
| Tatoeba-test.lat-grc.lat.grc | 0.2 | 0.001 |
| Tatoeba-test.lat-hbs.lat.hbs | 15.1 | 0.322 |
| Tatoeba-test.lat-isl.lat.isl | 8.3 | 0.108 |
| Tatoeba-test.lat-ita.lat.ita | 20.7 | 0.415 |
| Tatoeba-test.lat-kur.lat.kur | 7.9 | 0.260 |
| Tatoeba-test.lat-lad.lat.lad | 0.2 | 0.087 |
| Tatoeba-test.lat-lit.lat.lit | 5.6 | 0.301 |
| Tatoeba-test.lat-ltz.lat.ltz | 10.2 | 0.352 |
| Tatoeba-test.lat-nld.lat.nld | 24.3 | 0.444 |
| Tatoeba-test.lat-nor.lat.nor | 14.5 | 0.338 |
| Tatoeba-test.lat-orv.lat.orv | 0.1 | 0.006 |
| Tatoeba-test.lat-pol.lat.pol | 21.8 | 0.412 |
| Tatoeba-test.lat-por.lat.por | 12.2 | 0.336 |
| Tatoeba-test.lat-ron.lat.ron | 12.7 | 0.343 |
| Tatoeba-test.lat-rus.lat.rus | 16.6 | 0.362 |
| Tatoeba-test.lat-sco.lat.sco | 3.2 | 0.215 |
| Tatoeba-test.lat-spa.lat.spa | 18.9 | 0.414 |
| Tatoeba-test.lat-swe.lat.swe | 53.4 | 0.708 |
| Tatoeba-test.lat-ukr.lat.ukr | 14.0 | 0.343 |
| Tatoeba-test.lat-yid.lat.yid | 2.1 | 0.182 |
| Tatoeba-test.lav-dan.lav.dan | 100.0 | 1.000 |
| Tatoeba-test.lav-deu.lav.deu | 34.5 | 0.540 |
| Tatoeba-test.lav-eng.lav.eng | 33.6 | 0.520 |
| Tatoeba-test.lav-fra.lav.fra | 40.5 | 0.598 |
| Tatoeba-test.lav-isl.lav.isl | 72.7 | 0.770 |
| Tatoeba-test.lav-ita.lav.ita | 30.5 | 0.570 |
| Tatoeba-test.lav-lav.lav.lav | 5.7 | 0.362 |
| Tatoeba-test.lav-lit.lav.lit | 23.5 | 0.504 |
| Tatoeba-test.lav-mkd.lav.mkd | 13.7 | 0.550 |
| Tatoeba-test.lav-pol.lav.pol | 37.6 | 0.551 |
| Tatoeba-test.lav-rus.lav.rus | 32.5 | 0.517 |
| Tatoeba-test.lav-slv.lav.slv | 8.6 | 0.483 |
| Tatoeba-test.lav-spa.lav.spa | 26.6 | 0.511 |
| Tatoeba-test.lav-swe.lav.swe | 95.1 | 0.958 |
| Tatoeba-test.lav-ukr.lav.ukr | 9.0 | 0.488 |
| Tatoeba-test.lij-eng.lij.eng | 6.8 | 0.251 |
| Tatoeba-test.lij-fra.lij.fra | 12.2 | 0.329 |
| Tatoeba-test.lij-ita.lij.ita | 10.4 | 0.366 |
| Tatoeba-test.lit-deu.lit.deu | 25.7 | 0.472 |
| Tatoeba-test.lit-eng.lit.eng | 37.5 | 0.551 |
| Tatoeba-test.lit-fra.lit.fra | 32.1 | 0.489 |
| Tatoeba-test.lit-ita.lit.ita | 22.3 | 0.460 |
| Tatoeba-test.lit-lat.lit.lat | 7.4 | 0.195 |
| Tatoeba-test.lit-lav.lit.lav | 22.6 | 0.378 |
| Tatoeba-test.lit-mkd.lit.mkd | 9.7 | 0.282 |
| Tatoeba-test.lit-msa.lit.msa | 7.2 | 0.374 |
| Tatoeba-test.lit-pol.lit.pol | 30.9 | 0.529 |
| Tatoeba-test.lit-por.lit.por | 25.0 | 0.439 |
| Tatoeba-test.lit-rus.lit.rus | 30.6 | 0.504 |
| Tatoeba-test.lit-slv.lit.slv | 8.6 | 0.331 |
| Tatoeba-test.lit-spa.lit.spa | 32.9 | 0.516 |
| Tatoeba-test.lit-ukr.lit.ukr | 19.6 | 0.371 |
| Tatoeba-test.lit-yid.lit.yid | 6.5 | 0.360 |
| Tatoeba-test.lld-eng.lld.eng | 13.7 | 0.310 |
| Tatoeba-test.lld-fra.lld.fra | 13.1 | 0.368 |
| Tatoeba-test.lld-kur.lld.kur | 3.4 | 0.064 |
| Tatoeba-test.lld-spa.lld.spa | 9.3 | 0.351 |
| Tatoeba-test.lmo-eng.lmo.eng | 22.3 | 0.323 |
| Tatoeba-test.lmo-fra.lmo.fra | 10.9 | 0.333 |
| Tatoeba-test.ltz-afr.ltz.afr | 49.5 | 0.589 |
| Tatoeba-test.ltz-ang.ltz.ang | 0.0 | 0.051 |
| Tatoeba-test.ltz-ces.ltz.ces | 9.7 | 0.353 |
| Tatoeba-test.ltz-dan.ltz.dan | 65.1 | 0.463 |
| Tatoeba-test.ltz-deu.ltz.deu | 35.6 | 0.533 |
| Tatoeba-test.ltz-eng.ltz.eng | 33.7 | 0.448 |
| Tatoeba-test.ltz-fra.ltz.fra | 24.3 | 0.451 |
| Tatoeba-test.ltz-fry.ltz.fry | 23.4 | 0.621 |
| Tatoeba-test.ltz-gos.ltz.gos | 0.5 | 0.104 |
| Tatoeba-test.ltz-ita.ltz.ita | 14.2 | 0.412 |
| Tatoeba-test.ltz-lad.ltz.lad | 7.8 | 0.179 |
| Tatoeba-test.ltz-lat.ltz.lat | 7.6 | 0.106 |
| Tatoeba-test.ltz-nld.ltz.nld | 32.4 | 0.488 |
| Tatoeba-test.ltz-nor.ltz.nor | 27.8 | 0.599 |
| Tatoeba-test.ltz-por.ltz.por | 12.7 | 0.319 |
| Tatoeba-test.ltz-rus.ltz.rus | 18.0 | 0.392 |
| Tatoeba-test.ltz-spa.ltz.spa | 15.6 | 0.458 |
| Tatoeba-test.ltz-stq.ltz.stq | 0.6 | 0.065 |
| Tatoeba-test.ltz-swe.ltz.swe | 32.5 | 0.403 |
| Tatoeba-test.ltz-yid.ltz.yid | 1.4 | 0.236 |
| Tatoeba-test.mai-eng.mai.eng | 49.8 | 0.429 |
| Tatoeba-test.mai-spa.mai.spa | 18.6 | 0.460 |
| Tatoeba-test.mar-dan.mar.dan | 5.1 | 0.230 |
| Tatoeba-test.mar-deu.mar.deu | 14.2 | 0.379 |
| Tatoeba-test.mar-eng.mar.eng | 20.0 | 0.422 |
| Tatoeba-test.mar-fra.mar.fra | 40.7 | 0.470 |
| Tatoeba-test.mar-hin.mar.hin | 7.3 | 0.407 |
| Tatoeba-test.mar-rus.mar.rus | 35.4 | 0.638 |
| Tatoeba-test.mfe-eng.mfe.eng | 49.0 | 0.615 |
| Tatoeba-test.mkd-afr.mkd.afr | 42.7 | 0.655 |
| Tatoeba-test.mkd-bel.mkd.bel | 9.7 | 0.362 |
| Tatoeba-test.mkd-bul.mkd.bul | 61.6 | 0.819 |
| Tatoeba-test.mkd-ces.mkd.ces | 15.0 | 0.506 |
| Tatoeba-test.mkd-deu.mkd.deu | 31.0 | 0.548 |
| Tatoeba-test.mkd-eng.mkd.eng | 35.8 | 0.524 |
| Tatoeba-test.mkd-fra.mkd.fra | 30.2 | 0.486 |
| Tatoeba-test.mkd-hbs.mkd.hbs | 32.5 | 0.589 |
| Tatoeba-test.mkd-lav.mkd.lav | 16.6 | 0.557 |
| Tatoeba-test.mkd-lit.mkd.lit | 11.6 | 0.395 |
| Tatoeba-test.mkd-nld.mkd.nld | 42.7 | 0.680 |
| Tatoeba-test.mkd-pol.mkd.pol | 53.7 | 0.833 |
| Tatoeba-test.mkd-por.mkd.por | 10.1 | 0.492 |
| Tatoeba-test.mkd-ron.mkd.ron | 9.7 | 0.196 |
| Tatoeba-test.mkd-rus.mkd.rus | 24.7 | 0.727 |
| Tatoeba-test.mkd-spa.mkd.spa | 43.2 | 0.601 |
| Tatoeba-test.mkd-swe.mkd.swe | 23.6 | 0.361 |
| Tatoeba-test.mkd-ukr.mkd.ukr | 42.7 | 0.864 |
| Tatoeba-test.msa-afr.msa.afr | 3.4 | 0.323 |
| Tatoeba-test.msa-bel.msa.bel | 17.1 | 0.418 |
| Tatoeba-test.msa-bre.msa.bre | 1.8 | 0.199 |
| Tatoeba-test.msa-bul.msa.bul | 11.9 | 0.258 |
| Tatoeba-test.msa-ces.msa.ces | 3.4 | 0.115 |
| Tatoeba-test.msa-cym.msa.cym | 0.0 | 0.000 |
| Tatoeba-test.msa-deu.msa.deu | 23.5 | 0.470 |
| Tatoeba-test.msa-ell.msa.ell | 19.7 | 0.490 |
| Tatoeba-test.msa-eng.msa.eng | 27.8 | 0.472 |
| Tatoeba-test.msa-fao.msa.fao | 2.0 | 0.232 |
| Tatoeba-test.msa-fas.msa.fas | 5.9 | 0.241 |
| Tatoeba-test.msa-fra.msa.fra | 25.9 | 0.465 |
| Tatoeba-test.msa-fry.msa.fry | 1.7 | 0.195 |
| Tatoeba-test.msa-isl.msa.isl | 3.4 | 0.228 |
| Tatoeba-test.msa-ita.msa.ita | 23.4 | 0.481 |
| Tatoeba-test.msa-lit.msa.lit | 11.5 | 0.304 |
| Tatoeba-test.msa-msa.msa.msa | 5.8 | 0.243 |
| Tatoeba-test.msa-nld.msa.nld | 20.9 | 0.442 |
| Tatoeba-test.msa-nor.msa.nor | 14.8 | 0.431 |
| Tatoeba-test.msa-pap.msa.pap | 83.8 | 0.946 |
| Tatoeba-test.msa-pol.msa.pol | 9.1 | 0.349 |
| Tatoeba-test.msa-por.msa.por | 15.4 | 0.385 |
| Tatoeba-test.msa-ron.msa.ron | 3.4 | 0.195 |
| Tatoeba-test.msa-rus.msa.rus | 18.8 | 0.401 |
| Tatoeba-test.msa-san.msa.san | 0.0 | 0.056 |
| Tatoeba-test.msa-spa.msa.spa | 22.6 | 0.451 |
| Tatoeba-test.msa-ukr.msa.ukr | 5.7 | 0.267 |
| Tatoeba-test.msa-urd.msa.urd | 8.0 | 0.102 |
| Tatoeba-test.multi.multi | 30.8 | 0.509 |
| Tatoeba-test.mwl-eng.mwl.eng | 22.8 | 0.416 |
| Tatoeba-test.mwl-enm.mwl.enm | 7.0 | 0.321 |
| Tatoeba-test.mwl-por.mwl.por | 35.4 | 0.561 |
| Tatoeba-test.nds-ast.nds.ast | 42.7 | 0.835 |
| Tatoeba-test.nds-ces.nds.ces | 38.3 | 0.491 |
| Tatoeba-test.nds-dan.nds.dan | 18.5 | 0.399 |
| Tatoeba-test.nds-deu.nds.deu | 32.6 | 0.552 |
| Tatoeba-test.nds-ell.nds.ell | 18.1 | 0.426 |
| Tatoeba-test.nds-eng.nds.eng | 28.9 | 0.480 |
| Tatoeba-test.nds-enm.nds.enm | 6.9 | 0.198 |
| Tatoeba-test.nds-fas.nds.fas | 6.6 | 0.187 |
| Tatoeba-test.nds-fra.nds.fra | 31.9 | 0.498 |
| Tatoeba-test.nds-frr.nds.frr | 0.5 | 0.000 |
| Tatoeba-test.nds-fry.nds.fry | 0.0 | 0.023 |
| Tatoeba-test.nds-gos.nds.gos | 1.2 | 0.148 |
| Tatoeba-test.nds-ita.nds.ita | 28.5 | 0.505 |
| Tatoeba-test.nds-lad.nds.lad | 7.8 | 0.164 |
| Tatoeba-test.nds-nld.nds.nld | 38.2 | 0.584 |
| Tatoeba-test.nds-nor.nds.nor | 42.8 | 0.612 |
| Tatoeba-test.nds-pol.nds.pol | 15.3 | 0.405 |
| Tatoeba-test.nds-por.nds.por | 26.0 | 0.447 |
| Tatoeba-test.nds-ron.nds.ron | 0.0 | 0.353 |
| Tatoeba-test.nds-rus.nds.rus | 24.3 | 0.440 |
| Tatoeba-test.nds-spa.nds.spa | 31.7 | 0.527 |
| Tatoeba-test.nds-swg.nds.swg | 0.1 | 0.080 |
| Tatoeba-test.nds-ukr.nds.ukr | 20.1 | 0.464 |
| Tatoeba-test.nds-yid.nds.yid | 42.8 | 0.365 |
| Tatoeba-test.nep-eng.nep.eng | 2.1 | 0.161 |
| Tatoeba-test.nld-afr.nld.afr | 50.1 | 0.670 |
| Tatoeba-test.nld-ast.nld.ast | 42.7 | 0.835 |
| Tatoeba-test.nld-bel.nld.bel | 17.5 | 0.410 |
| Tatoeba-test.nld-bre.nld.bre | 3.2 | 0.189 |
| Tatoeba-test.nld-bul.nld.bul | 28.7 | 0.468 |
| Tatoeba-test.nld-cat.nld.cat | 31.9 | 0.546 |
| Tatoeba-test.nld-ces.nld.ces | 24.4 | 0.504 |
| Tatoeba-test.nld-cor.nld.cor | 0.6 | 0.048 |
| Tatoeba-test.nld-dan.nld.dan | 49.1 | 0.660 |
| Tatoeba-test.nld-deu.nld.deu | 38.3 | 0.589 |
| Tatoeba-test.nld-dsb.nld.dsb | 0.2 | 0.084 |
| Tatoeba-test.nld-ell.nld.ell | 35.3 | 0.528 |
| Tatoeba-test.nld-eng.nld.eng | 42.4 | 0.602 |
| Tatoeba-test.nld-enm.nld.enm | 6.1 | 0.269 |
| Tatoeba-test.nld-fas.nld.fas | 18.6 | 0.459 |
| Tatoeba-test.nld-fra.nld.fra | 35.7 | 0.549 |
| Tatoeba-test.nld-frr.nld.frr | 2.8 | 0.099 |
| Tatoeba-test.nld-fry.nld.fry | 19.2 | 0.438 |
| Tatoeba-test.nld-glg.nld.glg | 35.0 | 0.576 |
| Tatoeba-test.nld-gos.nld.gos | 0.5 | 0.129 |
| Tatoeba-test.nld-hat.nld.hat | 26.8 | 0.418 |
| Tatoeba-test.nld-ita.nld.ita | 35.3 | 0.580 |
| Tatoeba-test.nld-kur.nld.kur | 4.2 | 0.147 |
| Tatoeba-test.nld-lad.nld.lad | 0.7 | 0.101 |
| Tatoeba-test.nld-lat.nld.lat | 6.7 | 0.314 |
| Tatoeba-test.nld-ltz.nld.ltz | 17.6 | 0.384 |
| Tatoeba-test.nld-mkd.nld.mkd | 0.0 | 0.238 |
| Tatoeba-test.nld-msa.nld.msa | 3.6 | 0.210 |
| Tatoeba-test.nld-nds.nld.nds | 15.9 | 0.405 |
| Tatoeba-test.nld-nor.nld.nor | 42.4 | 0.618 |
| Tatoeba-test.nld-oci.nld.oci | 9.0 | 0.306 |
| Tatoeba-test.nld-pap.nld.pap | 38.9 | 0.531 |
| Tatoeba-test.nld-pol.nld.pol | 25.8 | 0.498 |
| Tatoeba-test.nld-por.nld.por | 31.7 | 0.535 |
| Tatoeba-test.nld-ron.nld.ron | 26.6 | 0.495 |
| Tatoeba-test.nld-rus.nld.rus | 30.0 | 0.512 |
| Tatoeba-test.nld-sco.nld.sco | 4.3 | 0.299 |
| Tatoeba-test.nld-spa.nld.spa | 35.0 | 0.560 |
| Tatoeba-test.nld-stq.nld.stq | 1.6 | 0.201 |
| Tatoeba-test.nld-swe.nld.swe | 72.2 | 0.801 |
| Tatoeba-test.nld-swg.nld.swg | 5.0 | 0.129 |
| Tatoeba-test.nld-ukr.nld.ukr | 26.2 | 0.481 |
| Tatoeba-test.nld-wln.nld.wln | 3.5 | 0.133 |
| Tatoeba-test.nld-yid.nld.yid | 11.5 | 0.293 |
| Tatoeba-test.non-eng.non.eng | 30.3 | 0.471 |
| Tatoeba-test.non-fra.non.fra | 90.1 | 0.839 |
| Tatoeba-test.nor-afr.nor.afr | 50.0 | 0.638 |
| Tatoeba-test.nor-bel.nor.bel | 42.2 | 0.467 |
| Tatoeba-test.nor-bre.nor.bre | 3.2 | 0.188 |
| Tatoeba-test.nor-bul.nor.bul | 35.4 | 0.529 |
| Tatoeba-test.nor-ces.nor.ces | 38.0 | 0.627 |
| Tatoeba-test.nor-cor.nor.cor | 3.2 | 0.072 |
| Tatoeba-test.nor-cym.nor.cym | 14.7 | 0.465 |
| Tatoeba-test.nor-dan.nor.dan | 59.0 | 0.757 |
| Tatoeba-test.nor-deu.nor.deu | 32.4 | 0.560 |
| Tatoeba-test.nor-ell.nor.ell | 29.9 | 0.507 |
| Tatoeba-test.nor-eng.nor.eng | 40.8 | 0.585 |
| Tatoeba-test.nor-enm.nor.enm | 4.2 | 0.303 |
| Tatoeba-test.nor-fao.nor.fao | 10.0 | 0.345 |
| Tatoeba-test.nor-fra.nor.fra | 38.4 | 0.572 |
| Tatoeba-test.nor-fry.nor.fry | 18.7 | 0.375 |
| Tatoeba-test.nor-got.nor.got | 10.7 | 0.015 |
| Tatoeba-test.nor-hbs.nor.hbs | 21.7 | 0.465 |
| Tatoeba-test.nor-hin.nor.hin | 14.8 | 0.307 |
| Tatoeba-test.nor-isl.nor.isl | 23.2 | 0.445 |
| Tatoeba-test.nor-ita.nor.ita | 35.2 | 0.594 |
| Tatoeba-test.nor-kur.nor.kur | 10.7 | 0.037 |
| Tatoeba-test.nor-lad.nor.lad | 6.6 | 0.370 |
| Tatoeba-test.nor-lat.nor.lat | 3.6 | 0.261 |
| Tatoeba-test.nor-ltz.nor.ltz | 12.2 | 0.404 |
| Tatoeba-test.nor-msa.nor.msa | 8.0 | 0.442 |
| Tatoeba-test.nor-nds.nor.nds | 20.3 | 0.466 |
| Tatoeba-test.nor-nld.nor.nld | 39.1 | 0.598 |
| Tatoeba-test.nor-nor.nor.nor | 49.0 | 0.698 |
| Tatoeba-test.nor-pol.nor.pol | 26.3 | 0.515 |
| Tatoeba-test.nor-por.nor.por | 31.0 | 0.543 |
| Tatoeba-test.nor-ron.nor.ron | 28.0 | 0.475 |
| Tatoeba-test.nor-rus.nor.rus | 28.1 | 0.513 |
| Tatoeba-test.nor-slv.nor.slv | 1.2 | 0.193 |
| Tatoeba-test.nor-spa.nor.spa | 38.2 | 0.598 |
| Tatoeba-test.nor-swe.nor.swe | 58.8 | 0.741 |
| Tatoeba-test.nor-ukr.nor.ukr | 29.1 | 0.515 |
| Tatoeba-test.nor-yid.nor.yid | 42.6 | 0.473 |
| Tatoeba-test.oci-deu.oci.deu | 11.2 | 0.346 |
| Tatoeba-test.oci-eng.oci.eng | 13.4 | 0.331 |
| Tatoeba-test.oci-enm.oci.enm | 5.3 | 0.206 |
| Tatoeba-test.oci-fra.oci.fra | 19.6 | 0.423 |
| Tatoeba-test.oci-ita.oci.ita | 24.5 | 0.493 |
| Tatoeba-test.oci-nld.oci.nld | 22.5 | 0.408 |
| Tatoeba-test.oci-pol.oci.pol | 8.8 | 0.322 |
| Tatoeba-test.oci-rus.oci.rus | 16.4 | 0.387 |
| Tatoeba-test.oci-spa.oci.spa | 20.4 | 0.442 |
| Tatoeba-test.oci-yid.oci.yid | 66.9 | 0.968 |
| Tatoeba-test.ori-eng.ori.eng | 3.9 | 0.168 |
| Tatoeba-test.ori-rus.ori.rus | 9.1 | 0.175 |
| Tatoeba-test.orv-deu.orv.deu | 5.8 | 0.256 |
| Tatoeba-test.orv-eng.orv.eng | 8.4 | 0.243 |
| Tatoeba-test.orv-fra.orv.fra | 8.9 | 0.244 |
| Tatoeba-test.orv-ita.orv.ita | 8.1 | 0.297 |
| Tatoeba-test.orv-lat.orv.lat | 1.2 | 0.207 |
| Tatoeba-test.orv-pol.orv.pol | 11.6 | 0.338 |
| Tatoeba-test.orv-rus.orv.rus | 8.2 | 0.234 |
| Tatoeba-test.orv-spa.orv.spa | 7.8 | 0.331 |
| Tatoeba-test.orv-ukr.orv.ukr | 6.4 | 0.217 |
| Tatoeba-test.oss-eng.oss.eng | 5.8 | 0.230 |
| Tatoeba-test.oss-fra.oss.fra | 10.8 | 0.279 |
| Tatoeba-test.oss-rus.oss.rus | 6.0 | 0.225 |
| Tatoeba-test.pan-eng.pan.eng | 6.1 | 0.256 |
| Tatoeba-test.pap-ell.pap.ell | 0.0 | 0.626 |
| Tatoeba-test.pap-eng.pap.eng | 45.7 | 0.586 |
| Tatoeba-test.pap-fra.pap.fra | 43.9 | 0.589 |
| Tatoeba-test.pap-msa.pap.msa | 0.0 | 0.347 |
| Tatoeba-test.pap-nld.pap.nld | 41.9 | 0.587 |
| Tatoeba-test.pcd-fra.pcd.fra | 14.4 | 0.365 |
| Tatoeba-test.pcd-spa.pcd.spa | 5.8 | 0.274 |
| Tatoeba-test.pdc-deu.pdc.deu | 33.0 | 0.474 |
| Tatoeba-test.pdc-eng.pdc.eng | 36.1 | 0.479 |
| Tatoeba-test.pms-cos.pms.cos | 0.7 | 0.026 |
| Tatoeba-test.pms-deu.pms.deu | 13.1 | 0.310 |
| Tatoeba-test.pms-eng.pms.eng | 8.8 | 0.296 |
| Tatoeba-test.pms-fra.pms.fra | 13.0 | 0.309 |
| Tatoeba-test.pms-ita.pms.ita | 10.0 | 0.327 |
| Tatoeba-test.pms-pol.pms.pol | 15.2 | 0.304 |
| Tatoeba-test.pms-spa.pms.spa | 10.4 | 0.352 |
| Tatoeba-test.pol-afr.pol.afr | 40.2 | 0.589 |
| Tatoeba-test.pol-bel.pol.bel | 24.8 | 0.503 |
| Tatoeba-test.pol-bul.pol.bul | 29.4 | 0.508 |
| Tatoeba-test.pol-cat.pol.cat | 20.3 | 0.416 |
| Tatoeba-test.pol-ces.pol.ces | 28.0 | 0.489 |
| Tatoeba-test.pol-cor.pol.cor | 1.3 | 0.052 |
| Tatoeba-test.pol-cym.pol.cym | 7.0 | 0.347 |
| Tatoeba-test.pol-dan.pol.dan | 37.0 | 0.551 |
| Tatoeba-test.pol-deu.pol.deu | 29.1 | 0.508 |
| Tatoeba-test.pol-dsb.pol.dsb | 0.8 | 0.070 |
| Tatoeba-test.pol-ell.pol.ell | 32.3 | 0.519 |
| Tatoeba-test.pol-eng.pol.eng | 34.1 | 0.531 |
| Tatoeba-test.pol-fao.pol.fao | 1.2 | 0.234 |
| Tatoeba-test.pol-fas.pol.fas | 6.5 | 0.208 |
| Tatoeba-test.pol-fra.pol.fra | 30.8 | 0.510 |
| Tatoeba-test.pol-fry.pol.fry | 7.2 | 0.287 |
| Tatoeba-test.pol-gla.pol.gla | 14.6 | 0.301 |
| Tatoeba-test.pol-glg.pol.glg | 18.4 | 0.498 |
| Tatoeba-test.pol-hbs.pol.hbs | 31.8 | 0.546 |
| Tatoeba-test.pol-hin.pol.hin | 3.5 | 0.193 |
| Tatoeba-test.pol-isl.pol.isl | 11.4 | 0.336 |
| Tatoeba-test.pol-ita.pol.ita | 28.5 | 0.522 |
| Tatoeba-test.pol-kur.pol.kur | 2.6 | 0.134 |
| Tatoeba-test.pol-lad.pol.lad | 16.0 | 0.265 |
| Tatoeba-test.pol-lat.pol.lat | 7.2 | 0.311 |
| Tatoeba-test.pol-lav.pol.lav | 22.9 | 0.450 |
| Tatoeba-test.pol-lit.pol.lit | 21.2 | 0.493 |
| Tatoeba-test.pol-mkd.pol.mkd | 38.0 | 0.718 |
| Tatoeba-test.pol-msa.pol.msa | 2.2 | 0.173 |
| Tatoeba-test.pol-nds.pol.nds | 14.4 | 0.370 |
| Tatoeba-test.pol-nld.pol.nld | 30.6 | 0.501 |
| Tatoeba-test.pol-nor.pol.nor | 33.3 | 0.536 |
| Tatoeba-test.pol-oci.pol.oci | 4.0 | 0.282 |
| Tatoeba-test.pol-orv.pol.orv | 0.4 | 0.005 |
| Tatoeba-test.pol-pms.pol.pms | 1.3 | 0.032 |
| Tatoeba-test.pol-por.pol.por | 25.9 | 0.491 |
| Tatoeba-test.pol-prg.pol.prg | 0.0 | 0.083 |
| Tatoeba-test.pol-ron.pol.ron | 26.5 | 0.487 |
| Tatoeba-test.pol-rus.pol.rus | 34.7 | 0.550 |
| Tatoeba-test.pol-slv.pol.slv | 7.4 | 0.256 |
| Tatoeba-test.pol-spa.pol.spa | 30.7 | 0.516 |
| Tatoeba-test.pol-swe.pol.swe | 35.0 | 0.530 |
| Tatoeba-test.pol-ukr.pol.ukr | 32.8 | 0.538 |
| Tatoeba-test.pol-urd.pol.urd | 5.6 | 0.381 |
| Tatoeba-test.pol-yid.pol.yid | 4.8 | 0.146 |
| Tatoeba-test.por-afr.por.afr | 48.1 | 0.653 |
| Tatoeba-test.por-ang.por.ang | 8.4 | 0.213 |
| Tatoeba-test.por-ast.por.ast | 42.7 | 0.835 |
| Tatoeba-test.por-bel.por.bel | 9.7 | 0.539 |
| Tatoeba-test.por-bul.por.bul | 41.5 | 0.569 |
| Tatoeba-test.por-cat.por.cat | 36.9 | 0.612 |
| Tatoeba-test.por-ces.por.ces | 29.0 | 0.526 |
| Tatoeba-test.por-cor.por.cor | 0.8 | 0.049 |
| Tatoeba-test.por-dan.por.dan | 51.4 | 0.668 |
| Tatoeba-test.por-deu.por.deu | 30.8 | 0.532 |
| Tatoeba-test.por-ell.por.ell | 33.8 | 0.556 |
| Tatoeba-test.por-eng.por.eng | 44.5 | 0.622 |
| Tatoeba-test.por-enm.por.enm | 10.7 | 0.190 |
| Tatoeba-test.por-fas.por.fas | 4.5 | 0.273 |
| Tatoeba-test.por-fra.por.fra | 43.0 | 0.625 |
| Tatoeba-test.por-fry.por.fry | 8.9 | 0.365 |
| Tatoeba-test.por-gcf.por.gcf | 16.0 | 0.079 |
| Tatoeba-test.por-gla.por.gla | 12.1 | 0.315 |
| Tatoeba-test.por-glg.por.glg | 49.2 | 0.700 |
| Tatoeba-test.por-grc.por.grc | 0.1 | 0.004 |
| Tatoeba-test.por-hbs.por.hbs | 39.2 | 0.575 |
| Tatoeba-test.por-isl.por.isl | 15.5 | 0.387 |
| Tatoeba-test.por-ita.por.ita | 39.9 | 0.637 |
| Tatoeba-test.por-kur.por.kur | 3.0 | 0.133 |
| Tatoeba-test.por-lad.por.lad | 0.6 | 0.172 |
| Tatoeba-test.por-lat.por.lat | 5.4 | 0.325 |
| Tatoeba-test.por-lit.por.lit | 18.8 | 0.418 |
| Tatoeba-test.por-ltz.por.ltz | 16.8 | 0.569 |
| Tatoeba-test.por-mkd.por.mkd | 27.3 | 0.571 |
| Tatoeba-test.por-msa.por.msa | 7.6 | 0.327 |
| Tatoeba-test.por-mwl.por.mwl | 30.5 | 0.559 |
| Tatoeba-test.por-nds.por.nds | 14.2 | 0.370 |
| Tatoeba-test.por-nld.por.nld | 35.6 | 0.558 |
| Tatoeba-test.por-nor.por.nor | 38.0 | 0.587 |
| Tatoeba-test.por-pol.por.pol | 25.5 | 0.510 |
| Tatoeba-test.por-roh.por.roh | 5.5 | 0.058 |
| Tatoeba-test.por-ron.por.ron | 32.0 | 0.557 |
| Tatoeba-test.por-rus.por.rus | 26.8 | 0.493 |
| Tatoeba-test.por-spa.por.spa | 48.7 | 0.686 |
| Tatoeba-test.por-swe.por.swe | 43.4 | 0.612 |
| Tatoeba-test.por-ukr.por.ukr | 27.5 | 0.500 |
| Tatoeba-test.por-yid.por.yid | 9.3 | 0.293 |
| Tatoeba-test.prg-deu.prg.deu | 2.2 | 0.183 |
| Tatoeba-test.prg-eng.prg.eng | 1.3 | 0.179 |
| Tatoeba-test.prg-fra.prg.fra | 2.3 | 0.183 |
| Tatoeba-test.prg-pol.prg.pol | 0.5 | 0.173 |
| Tatoeba-test.prg-spa.prg.spa | 3.4 | 0.200 |
| Tatoeba-test.pus-eng.pus.eng | 1.6 | 0.166 |
| Tatoeba-test.roh-deu.roh.deu | 8.3 | 0.311 |
| Tatoeba-test.roh-eng.roh.eng | 9.5 | 0.361 |
| Tatoeba-test.roh-fra.roh.fra | 8.8 | 0.415 |
| Tatoeba-test.roh-por.roh.por | 21.4 | 0.347 |
| Tatoeba-test.roh-spa.roh.spa | 13.3 | 0.434 |
| Tatoeba-test.rom-deu.rom.deu | 2.9 | 0.204 |
| Tatoeba-test.rom-eng.rom.eng | 5.3 | 0.243 |
| Tatoeba-test.rom-fra.rom.fra | 6.5 | 0.194 |
| Tatoeba-test.ron-afr.ron.afr | 30.2 | 0.667 |
| Tatoeba-test.ron-bul.ron.bul | 35.4 | 0.493 |
| Tatoeba-test.ron-cat.ron.cat | 23.6 | 0.542 |
| Tatoeba-test.ron-ces.ron.ces | 10.6 | 0.344 |
| Tatoeba-test.ron-dan.ron.dan | 12.7 | 0.652 |
| Tatoeba-test.ron-deu.ron.deu | 32.1 | 0.524 |
| Tatoeba-test.ron-eng.ron.eng | 38.4 | 0.566 |
| Tatoeba-test.ron-enm.ron.enm | 5.3 | 0.351 |
| Tatoeba-test.ron-fas.ron.fas | 7.3 | 0.338 |
| Tatoeba-test.ron-fra.ron.fra | 38.0 | 0.571 |
| Tatoeba-test.ron-gle.ron.gle | 10.7 | 0.116 |
| Tatoeba-test.ron-ita.ron.ita | 36.2 | 0.587 |
| Tatoeba-test.ron-lad.ron.lad | 2.4 | 0.233 |
| Tatoeba-test.ron-lat.ron.lat | 6.5 | 0.368 |
| Tatoeba-test.ron-mkd.ron.mkd | 27.5 | 0.484 |
| Tatoeba-test.ron-msa.ron.msa | 0.8 | 0.082 |
| Tatoeba-test.ron-nds.ron.nds | 9.7 | 0.168 |
| Tatoeba-test.ron-nld.ron.nld | 32.5 | 0.522 |
| Tatoeba-test.ron-nor.ron.nor | 45.2 | 0.656 |
| Tatoeba-test.ron-pol.ron.pol | 32.2 | 0.554 |
| Tatoeba-test.ron-por.ron.por | 33.6 | 0.577 |
| Tatoeba-test.ron-rus.ron.rus | 33.3 | 0.536 |
| Tatoeba-test.ron-slv.ron.slv | 19.0 | 0.113 |
| Tatoeba-test.ron-spa.ron.spa | 40.8 | 0.605 |
| Tatoeba-test.ron-swe.ron.swe | 12.7 | 0.288 |
| Tatoeba-test.ron-yid.ron.yid | 19.7 | 0.285 |
| Tatoeba-test.rue-eng.rue.eng | 18.7 | 0.359 |
| Tatoeba-test.rue-spa.rue.spa | 30.1 | 0.455 |
| Tatoeba-test.rus-afr.rus.afr | 34.7 | 0.540 |
| Tatoeba-test.rus-ang.rus.ang | 0.0 | 0.042 |
| Tatoeba-test.rus-ast.rus.ast | 42.7 | 0.835 |
| Tatoeba-test.rus-bel.rus.bel | 35.0 | 0.587 |
| Tatoeba-test.rus-bul.rus.bul | 30.8 | 0.534 |
| Tatoeba-test.rus-cat.rus.cat | 27.9 | 0.512 |
| Tatoeba-test.rus-ces.rus.ces | 33.8 | 0.537 |
| Tatoeba-test.rus-cor.rus.cor | 0.4 | 0.038 |
| Tatoeba-test.rus-cym.rus.cym | 7.6 | 0.384 |
| Tatoeba-test.rus-dan.rus.dan | 37.9 | 0.559 |
| Tatoeba-test.rus-deu.rus.deu | 31.3 | 0.528 |
| Tatoeba-test.rus-dsb.rus.dsb | 16.0 | 0.060 |
| Tatoeba-test.rus-ell.rus.ell | 29.0 | 0.512 |
| Tatoeba-test.rus-eng.rus.eng | 37.6 | 0.553 |
| Tatoeba-test.rus-enm.rus.enm | 1.6 | 0.138 |
| Tatoeba-test.rus-fas.rus.fas | 4.2 | 0.278 |
| Tatoeba-test.rus-fra.rus.fra | 33.0 | 0.524 |
| Tatoeba-test.rus-fry.rus.fry | 16.3 | 0.308 |
| Tatoeba-test.rus-gcf.rus.gcf | 10.7 | 0.045 |
| Tatoeba-test.rus-gla.rus.gla | 22.3 | 0.427 |
| Tatoeba-test.rus-gle.rus.gle | 5.9 | 0.310 |
| Tatoeba-test.rus-glg.rus.glg | 20.6 | 0.459 |
| Tatoeba-test.rus-gos.rus.gos | 1.5 | 0.152 |
| Tatoeba-test.rus-hbs.rus.hbs | 31.0 | 0.546 |
| Tatoeba-test.rus-hin.rus.hin | 5.5 | 0.326 |
| Tatoeba-test.rus-hye.rus.hye | 12.7 | 0.365 |
| Tatoeba-test.rus-isl.rus.isl | 9.0 | 0.320 |
| Tatoeba-test.rus-ita.rus.ita | 26.6 | 0.495 |
| Tatoeba-test.rus-kur.rus.kur | 5.6 | 0.210 |
| Tatoeba-test.rus-lad.rus.lad | 1.0 | 0.169 |
| Tatoeba-test.rus-lat.rus.lat | 7.9 | 0.328 |
| Tatoeba-test.rus-lav.rus.lav | 31.1 | 0.519 |
| Tatoeba-test.rus-lit.rus.lit | 22.0 | 0.489 |
| Tatoeba-test.rus-ltz.rus.ltz | 19.4 | 0.263 |
| Tatoeba-test.rus-mar.rus.mar | 19.0 | 0.217 |
| Tatoeba-test.rus-mkd.rus.mkd | 38.5 | 0.662 |
| Tatoeba-test.rus-msa.rus.msa | 6.6 | 0.305 |
| Tatoeba-test.rus-nds.rus.nds | 11.5 | 0.350 |
| Tatoeba-test.rus-nld.rus.nld | 31.1 | 0.517 |
| Tatoeba-test.rus-nor.rus.nor | 31.2 | 0.528 |
| Tatoeba-test.rus-oci.rus.oci | 4.9 | 0.261 |
| Tatoeba-test.rus-ori.rus.ori | 7.3 | 0.325 |
| Tatoeba-test.rus-orv.rus.orv | 0.0 | 0.008 |
| Tatoeba-test.rus-oss.rus.oss | 4.8 | 0.198 |
| Tatoeba-test.rus-pol.rus.pol | 31.3 | 0.540 |
| Tatoeba-test.rus-por.rus.por | 24.5 | 0.476 |
| Tatoeba-test.rus-ron.rus.ron | 25.7 | 0.492 |
| Tatoeba-test.rus-slv.rus.slv | 20.7 | 0.400 |
| Tatoeba-test.rus-spa.rus.spa | 30.9 | 0.526 |
| Tatoeba-test.rus-swe.rus.swe | 32.0 | 0.507 |
| Tatoeba-test.rus-ukr.rus.ukr | 41.1 | 0.622 |
| Tatoeba-test.rus-urd.rus.urd | 7.1 | 0.367 |
| Tatoeba-test.rus-yid.rus.yid | 4.7 | 0.253 |
| Tatoeba-test.san-eng.san.eng | 2.5 | 0.167 |
| Tatoeba-test.san-msa.san.msa | 11.7 | 0.217 |
| Tatoeba-test.scn-deu.scn.deu | 3.9 | 0.224 |
| Tatoeba-test.scn-eng.scn.eng | 40.7 | 0.420 |
| Tatoeba-test.scn-fra.scn.fra | 2.1 | 0.134 |
| Tatoeba-test.scn-spa.scn.spa | 3.4 | 0.244 |
| Tatoeba-test.sco-deu.sco.deu | 17.2 | 0.310 |
| Tatoeba-test.sco-eng.sco.eng | 32.8 | 0.524 |
| Tatoeba-test.sco-fra.sco.fra | 5.7 | 0.254 |
| Tatoeba-test.sco-lad.sco.lad | 5.3 | 0.023 |
| Tatoeba-test.sco-lat.sco.lat | 3.5 | 0.237 |
| Tatoeba-test.sco-nld.sco.nld | 11.9 | 0.335 |
| Tatoeba-test.sgs-eng.sgs.eng | 23.7 | 0.300 |
| Tatoeba-test.sgs-spa.sgs.spa | 0.0 | 0.146 |
| Tatoeba-test.sin-eng.sin.eng | 14.1 | 0.313 |
| Tatoeba-test.slv-ces.slv.ces | 33.2 | 0.528 |
| Tatoeba-test.slv-deu.slv.deu | 33.4 | 0.518 |
| Tatoeba-test.slv-eng.slv.eng | 29.9 | 0.489 |
| Tatoeba-test.slv-fra.slv.fra | 19.5 | 0.405 |
| Tatoeba-test.slv-ita.slv.ita | 28.6 | 0.499 |
| Tatoeba-test.slv-lad.slv.lad | 5.5 | 0.296 |
| Tatoeba-test.slv-lav.slv.lav | 18.0 | 0.546 |
| Tatoeba-test.slv-lit.slv.lit | 18.0 | 0.452 |
| Tatoeba-test.slv-nor.slv.nor | 20.3 | 0.406 |
| Tatoeba-test.slv-pol.slv.pol | 33.1 | 0.541 |
| Tatoeba-test.slv-ron.slv.ron | 12.4 | 0.348 |
| Tatoeba-test.slv-rus.slv.rus | 33.4 | 0.519 |
| Tatoeba-test.slv-spa.slv.spa | 32.9 | 0.503 |
| Tatoeba-test.slv-swe.slv.swe | 14.8 | 0.095 |
| Tatoeba-test.slv-ukr.slv.ukr | 30.1 | 0.471 |
| Tatoeba-test.snd-eng.snd.eng | 12.7 | 0.377 |
| Tatoeba-test.spa-afr.spa.afr | 46.9 | 0.624 |
| Tatoeba-test.spa-ang.spa.ang | 1.1 | 0.143 |
| Tatoeba-test.spa-arg.spa.arg | 21.6 | 0.446 |
| Tatoeba-test.spa-ast.spa.ast | 28.1 | 0.526 |
| Tatoeba-test.spa-bel.spa.bel | 22.8 | 0.466 |
| Tatoeba-test.spa-ben.spa.ben | 16.9 | 0.442 |
| Tatoeba-test.spa-bul.spa.bul | 30.8 | 0.510 |
| Tatoeba-test.spa-cat.spa.cat | 49.1 | 0.696 |
| Tatoeba-test.spa-ces.spa.ces | 27.2 | 0.497 |
| Tatoeba-test.spa-cor.spa.cor | 0.5 | 0.049 |
| Tatoeba-test.spa-csb.spa.csb | 5.3 | 0.204 |
| Tatoeba-test.spa-cym.spa.cym | 22.4 | 0.476 |
| Tatoeba-test.spa-dan.spa.dan | 39.3 | 0.581 |
| Tatoeba-test.spa-deu.spa.deu | 30.9 | 0.531 |
| Tatoeba-test.spa-dsb.spa.dsb | 0.7 | 0.109 |
| Tatoeba-test.spa-egl.spa.egl | 0.9 | 0.060 |
| Tatoeba-test.spa-ell.spa.ell | 28.9 | 0.487 |
| Tatoeba-test.spa-eng.spa.eng | 41.0 | 0.595 |
| Tatoeba-test.spa-enm.spa.enm | 13.9 | 0.188 |
| Tatoeba-test.spa-fas.spa.fas | 7.9 | 0.244 |
| Tatoeba-test.spa-fra.spa.fra | 41.4 | 0.610 |
| Tatoeba-test.spa-fry.spa.fry | 15.8 | 0.397 |
| Tatoeba-test.spa-gcf.spa.gcf | 7.0 | 0.060 |
| Tatoeba-test.spa-gla.spa.gla | 7.4 | 0.303 |
| Tatoeba-test.spa-gle.spa.gle | 22.2 | 0.415 |
| Tatoeba-test.spa-glg.spa.glg | 48.8 | 0.683 |
| Tatoeba-test.spa-gos.spa.gos | 1.7 | 0.181 |
| Tatoeba-test.spa-got.spa.got | 0.3 | 0.010 |
| Tatoeba-test.spa-grc.spa.grc | 0.1 | 0.005 |
| Tatoeba-test.spa-gsw.spa.gsw | 5.6 | 0.051 |
| Tatoeba-test.spa-guj.spa.guj | 15.0 | 0.365 |
| Tatoeba-test.spa-hat.spa.hat | 19.9 | 0.409 |
| Tatoeba-test.spa-hbs.spa.hbs | 33.2 | 0.529 |
| Tatoeba-test.spa-hin.spa.hin | 16.1 | 0.331 |
| Tatoeba-test.spa-hsb.spa.hsb | 5.1 | 0.240 |
| Tatoeba-test.spa-hye.spa.hye | 13.5 | 0.357 |
| Tatoeba-test.spa-isl.spa.isl | 18.0 | 0.410 |
| Tatoeba-test.spa-ita.spa.ita | 42.7 | 0.646 |
| Tatoeba-test.spa-ksh.spa.ksh | 0.4 | 0.088 |
| Tatoeba-test.spa-kur.spa.kur | 5.6 | 0.237 |
| Tatoeba-test.spa-lad.spa.lad | 0.9 | 0.157 |
| Tatoeba-test.spa-lat.spa.lat | 9.0 | 0.382 |
| Tatoeba-test.spa-lav.spa.lav | 23.7 | 0.510 |
| Tatoeba-test.spa-lit.spa.lit | 22.4 | 0.477 |
| Tatoeba-test.spa-lld.spa.lld | 0.4 | 0.119 |
| Tatoeba-test.spa-ltz.spa.ltz | 34.1 | 0.531 |
| Tatoeba-test.spa-mai.spa.mai | 29.4 | 0.416 |
| Tatoeba-test.spa-mkd.spa.mkd | 37.1 | 0.568 |
| Tatoeba-test.spa-msa.spa.msa | 14.0 | 0.405 |
| Tatoeba-test.spa-nds.spa.nds | 15.4 | 0.390 |
| Tatoeba-test.spa-nld.spa.nld | 34.0 | 0.550 |
| Tatoeba-test.spa-nor.spa.nor | 41.1 | 0.608 |
| Tatoeba-test.spa-oci.spa.oci | 8.0 | 0.353 |
| Tatoeba-test.spa-orv.spa.orv | 0.4 | 0.010 |
| Tatoeba-test.spa-pcd.spa.pcd | 0.2 | 0.060 |
| Tatoeba-test.spa-pms.spa.pms | 0.6 | 0.122 |
| Tatoeba-test.spa-pol.spa.pol | 26.3 | 0.498 |
| Tatoeba-test.spa-por.spa.por | 41.6 | 0.638 |
| Tatoeba-test.spa-prg.spa.prg | 0.3 | 0.095 |
| Tatoeba-test.spa-roh.spa.roh | 4.0 | 0.219 |
| Tatoeba-test.spa-ron.spa.ron | 31.9 | 0.550 |
| Tatoeba-test.spa-rue.spa.rue | 0.2 | 0.013 |
| Tatoeba-test.spa-rus.spa.rus | 29.4 | 0.510 |
| Tatoeba-test.spa-scn.spa.scn | 1.6 | 0.086 |
| Tatoeba-test.spa-sgs.spa.sgs | 16.0 | 0.111 |
| Tatoeba-test.spa-slv.spa.slv | 9.2 | 0.269 |
| Tatoeba-test.spa-stq.spa.stq | 8.4 | 0.375 |
| Tatoeba-test.spa-swe.spa.swe | 39.5 | 0.572 |
| Tatoeba-test.spa-ukr.spa.ukr | 27.8 | 0.495 |
| Tatoeba-test.spa-wln.spa.wln | 2.9 | 0.220 |
| Tatoeba-test.spa-yid.spa.yid | 10.0 | 0.296 |
| Tatoeba-test.sqi-eng.sqi.eng | 30.9 | 0.499 |
| Tatoeba-test.sqi-fra.sqi.fra | 29.9 | 0.545 |
| Tatoeba-test.sqi-ita.sqi.ita | 24.5 | 0.484 |
| Tatoeba-test.srd-fra.srd.fra | 5.8 | 0.347 |
| Tatoeba-test.stq-deu.stq.deu | 16.7 | 0.426 |
| Tatoeba-test.stq-eng.stq.eng | 8.4 | 0.370 |
| Tatoeba-test.stq-frr.stq.frr | 0.6 | 0.032 |
| Tatoeba-test.stq-fry.stq.fry | 9.3 | 0.283 |
| Tatoeba-test.stq-gos.stq.gos | 0.3 | 0.126 |
| Tatoeba-test.stq-isl.stq.isl | 0.0 | 0.102 |
| Tatoeba-test.stq-ltz.stq.ltz | 4.0 | 0.175 |
| Tatoeba-test.stq-nld.stq.nld | 13.2 | 0.398 |
| Tatoeba-test.stq-spa.stq.spa | 7.0 | 0.345 |
| Tatoeba-test.stq-yid.stq.yid | 5.0 | 0.110 |
| Tatoeba-test.swe-afr.swe.afr | 63.1 | 0.831 |
| Tatoeba-test.swe-bul.swe.bul | 35.4 | 0.529 |
| Tatoeba-test.swe-cat.swe.cat | 38.5 | 0.528 |
| Tatoeba-test.swe-ces.swe.ces | 32.8 | 0.380 |
| Tatoeba-test.swe-dan.swe.dan | 54.5 | 0.702 |
| Tatoeba-test.swe-deu.swe.deu | 36.7 | 0.570 |
| Tatoeba-test.swe-ell.swe.ell | 32.9 | 0.541 |
| Tatoeba-test.swe-eng.swe.eng | 44.9 | 0.606 |
| Tatoeba-test.swe-fao.swe.fao | 0.0 | 0.877 |
| Tatoeba-test.swe-fra.swe.fra | 43.2 | 0.605 |
| Tatoeba-test.swe-fry.swe.fry | 42.7 | 0.402 |
| Tatoeba-test.swe-gos.swe.gos | 4.8 | 0.253 |
| Tatoeba-test.swe-hbs.swe.hbs | 39.3 | 0.591 |
| Tatoeba-test.swe-hin.swe.hin | 31.6 | 0.617 |
| Tatoeba-test.swe-isl.swe.isl | 21.2 | 0.559 |
| Tatoeba-test.swe-ita.swe.ita | 33.1 | 0.548 |
| Tatoeba-test.swe-kur.swe.kur | 1.4 | 0.144 |
| Tatoeba-test.swe-lad.swe.lad | 6.6 | 0.373 |
| Tatoeba-test.swe-lat.swe.lat | 4.5 | 0.453 |
| Tatoeba-test.swe-lav.swe.lav | 73.4 | 0.828 |
| Tatoeba-test.swe-ltz.swe.ltz | 25.5 | 0.440 |
| Tatoeba-test.swe-mkd.swe.mkd | 0.0 | 0.124 |
| Tatoeba-test.swe-nld.swe.nld | 71.9 | 0.742 |
| Tatoeba-test.swe-nor.swe.nor | 59.5 | 0.742 |
| Tatoeba-test.swe-pol.swe.pol | 25.9 | 0.497 |
| Tatoeba-test.swe-por.swe.por | 31.3 | 0.546 |
| Tatoeba-test.swe-ron.swe.ron | 100.0 | 1.000 |
| Tatoeba-test.swe-rus.swe.rus | 28.6 | 0.495 |
| Tatoeba-test.swe-slv.swe.slv | 19.0 | 0.116 |
| Tatoeba-test.swe-spa.swe.spa | 37.1 | 0.569 |
| Tatoeba-test.swe-yid.swe.yid | 13.9 | 0.336 |
| Tatoeba-test.swg-ces.swg.ces | 16.5 | 0.438 |
| Tatoeba-test.swg-dan.swg.dan | 20.1 | 0.468 |
| Tatoeba-test.swg-deu.swg.deu | 8.0 | 0.316 |
| Tatoeba-test.swg-eng.swg.eng | 13.0 | 0.300 |
| Tatoeba-test.swg-fra.swg.fra | 15.3 | 0.296 |
| Tatoeba-test.swg-nds.swg.nds | 0.9 | 0.199 |
| Tatoeba-test.swg-nld.swg.nld | 4.9 | 0.287 |
| Tatoeba-test.swg-yid.swg.yid | 1.9 | 0.194 |
| Tatoeba-test.tgk-deu.tgk.deu | 45.2 | 0.574 |
| Tatoeba-test.tgk-eng.tgk.eng | 7.8 | 0.271 |
| Tatoeba-test.tgk-fra.tgk.fra | 9.6 | 0.273 |
| Tatoeba-test.tly-eng.tly.eng | 0.9 | 0.102 |
| Tatoeba-test.tly-fra.tly.fra | 4.4 | 0.054 |
| Tatoeba-test.ukr-afr.ukr.afr | 48.3 | 0.646 |
| Tatoeba-test.ukr-ang.ukr.ang | 1.4 | 0.034 |
| Tatoeba-test.ukr-bel.ukr.bel | 36.7 | 0.601 |
| Tatoeba-test.ukr-bul.ukr.bul | 40.4 | 0.601 |
| Tatoeba-test.ukr-cat.ukr.cat | 33.9 | 0.538 |
| Tatoeba-test.ukr-ces.ukr.ces | 33.1 | 0.524 |
| Tatoeba-test.ukr-dan.ukr.dan | 25.8 | 0.469 |
| Tatoeba-test.ukr-deu.ukr.deu | 34.0 | 0.543 |
| Tatoeba-test.ukr-ell.ukr.ell | 23.0 | 0.493 |
| Tatoeba-test.ukr-eng.ukr.eng | 36.1 | 0.538 |
| Tatoeba-test.ukr-enm.ukr.enm | 3.6 | 0.400 |
| Tatoeba-test.ukr-fas.ukr.fas | 5.3 | 0.240 |
| Tatoeba-test.ukr-fra.ukr.fra | 32.0 | 0.519 |
| Tatoeba-test.ukr-fry.ukr.fry | 13.6 | 0.318 |
| Tatoeba-test.ukr-gos.ukr.gos | 3.8 | 0.199 |
| Tatoeba-test.ukr-hbs.ukr.hbs | 33.4 | 0.547 |
| Tatoeba-test.ukr-ita.ukr.ita | 32.6 | 0.546 |
| Tatoeba-test.ukr-lad.ukr.lad | 1.4 | 0.166 |
| Tatoeba-test.ukr-lat.ukr.lat | 8.0 | 0.314 |
| Tatoeba-test.ukr-lav.ukr.lav | 10.7 | 0.520 |
| Tatoeba-test.ukr-lit.ukr.lit | 59.9 | 0.631 |
| Tatoeba-test.ukr-mkd.ukr.mkd | 38.0 | 0.718 |
| Tatoeba-test.ukr-msa.ukr.msa | 2.5 | 0.213 |
| Tatoeba-test.ukr-nds.ukr.nds | 11.0 | 0.368 |
| Tatoeba-test.ukr-nld.ukr.nld | 33.0 | 0.524 |
| Tatoeba-test.ukr-nor.ukr.nor | 40.4 | 0.574 |
| Tatoeba-test.ukr-orv.ukr.orv | 0.1 | 0.008 |
| Tatoeba-test.ukr-pol.ukr.pol | 32.7 | 0.553 |
| Tatoeba-test.ukr-por.ukr.por | 26.8 | 0.496 |
| Tatoeba-test.ukr-rus.ukr.rus | 45.7 | 0.651 |
| Tatoeba-test.ukr-slv.ukr.slv | 11.8 | 0.263 |
| Tatoeba-test.ukr-spa.ukr.spa | 31.7 | 0.528 |
| Tatoeba-test.ukr-yid.ukr.yid | 3.6 | 0.196 |
| Tatoeba-test.urd-dan.urd.dan | 36.7 | 0.586 |
| Tatoeba-test.urd-deu.urd.deu | 17.1 | 0.451 |
| Tatoeba-test.urd-eng.urd.eng | 17.1 | 0.375 |
| Tatoeba-test.urd-fra.urd.fra | 38.1 | 0.565 |
| Tatoeba-test.urd-hbs.urd.hbs | 0.0 | 1.000 |
| Tatoeba-test.urd-hin.urd.hin | 14.0 | 0.404 |
| Tatoeba-test.urd-msa.urd.msa | 1.5 | 0.014 |
| Tatoeba-test.urd-pol.urd.pol | 68.7 | 0.695 |
| Tatoeba-test.urd-rus.urd.rus | 25.8 | 0.314 |
| Tatoeba-test.vec-eng.vec.eng | 13.6 | 0.319 |
| Tatoeba-test.vec-fra.vec.fra | 48.3 | 0.680 |
| Tatoeba-test.vec-ita.vec.ita | 28.3 | 0.454 |
| Tatoeba-test.wln-eng.wln.eng | 4.4 | 0.206 |
| Tatoeba-test.wln-fra.wln.fra | 8.0 | 0.282 |
| Tatoeba-test.wln-nld.wln.nld | 5.2 | 0.237 |
| Tatoeba-test.wln-spa.wln.spa | 9.9 | 0.395 |
| Tatoeba-test.yid-afr.yid.afr | 35.4 | 0.868 |
| Tatoeba-test.yid-ang.yid.ang | 0.8 | 0.077 |
| Tatoeba-test.yid-bel.yid.bel | 4.9 | 0.240 |
| Tatoeba-test.yid-bul.yid.bul | 11.3 | 0.054 |
| Tatoeba-test.yid-cat.yid.cat | 19.0 | 0.583 |
| Tatoeba-test.yid-ces.yid.ces | 5.4 | 0.320 |
| Tatoeba-test.yid-cym.yid.cym | 6.3 | 0.239 |
| Tatoeba-test.yid-dan.yid.dan | 12.8 | 0.341 |
| Tatoeba-test.yid-deu.yid.deu | 17.5 | 0.382 |
| Tatoeba-test.yid-ell.yid.ell | 42.7 | 0.797 |
| Tatoeba-test.yid-eng.yid.eng | 15.5 | 0.338 |
| Tatoeba-test.yid-enm.yid.enm | 2.3 | 0.176 |
| Tatoeba-test.yid-fas.yid.fas | 4.5 | 0.207 |
| Tatoeba-test.yid-fra.yid.fra | 18.9 | 0.367 |
| Tatoeba-test.yid-fry.yid.fry | 6.0 | 0.156 |
| Tatoeba-test.yid-gle.yid.gle | 32.2 | 0.448 |
| Tatoeba-test.yid-gos.yid.gos | 1.3 | 0.142 |
| Tatoeba-test.yid-ita.yid.ita | 15.3 | 0.363 |
| Tatoeba-test.yid-kur.yid.kur | 3.2 | 0.166 |
| Tatoeba-test.yid-lad.yid.lad | 0.1 | 0.090 |
| Tatoeba-test.yid-lat.yid.lat | 1.8 | 0.206 |
| Tatoeba-test.yid-lit.yid.lit | 27.8 | 0.560 |
| Tatoeba-test.yid-ltz.yid.ltz | 4.2 | 0.316 |
| Tatoeba-test.yid-nds.yid.nds | 24.6 | 0.466 |
| Tatoeba-test.yid-nld.yid.nld | 24.5 | 0.431 |
| Tatoeba-test.yid-nor.yid.nor | 5.0 | 0.318 |
| Tatoeba-test.yid-oci.yid.oci | 19.0 | 0.390 |
| Tatoeba-test.yid-pol.yid.pol | 15.0 | 0.258 |
| Tatoeba-test.yid-por.yid.por | 7.4 | 0.326 |
| Tatoeba-test.yid-ron.yid.ron | 12.3 | 0.325 |
| Tatoeba-test.yid-rus.yid.rus | 14.2 | 0.324 |
| Tatoeba-test.yid-spa.yid.spa | 16.1 | 0.369 |
| Tatoeba-test.yid-stq.yid.stq | 3.2 | 0.125 |
| Tatoeba-test.yid-swe.yid.swe | 55.9 | 0.672 |
| Tatoeba-test.yid-swg.yid.swg | 0.3 | 0.083 |
| Tatoeba-test.yid-ukr.yid.ukr | 7.2 | 0.383 |
| Tatoeba-test.zza-asm.zza.asm | 0.0 | 0.102 |
| Tatoeba-test.zza-eng.zza.eng | 1.9 | 0.135 |
### System Info:
- hf_name: ine-ine
- source_languages: ine
- target_languages: ine
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ine-ine/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'en', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']
- src_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos_Latn', 'lad_Latn', 'lat_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm_Latn', 'srd', 'gcf_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur_Latn', 'arg', 'pes_Thaa', 'sqi', 'csb_Latn', 'fra', 'hat', 'non_Latn', 'sco', 'pnb', 'roh', 'bul_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw_Latn', 'hsb', 'tly_Latn', 'bul', 'bel', 'got_Goth', 'lat_Grek', 'ext', 'gla', 'mai', 'sin', 'hif_Latn', 'eng', 'bre', 'nob_Hebr', 'prg_Latn', 'ang_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr_Arab', 'san_Deva', 'gos', 'rus', 'fao', 'orv_Cyrl', 'bel_Latn', 'cos', 'zza', 'grc_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk_Cyrl', 'hye_Latn', 'pdc', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp_Latn', 'zlm_Latn', 'ind', 'rom', 'hye', 'scn', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus_Latn', 'jdt_Cyrl', 'gsw', 'glv', 'nld', 'snd_Arab', 'kur_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm_Latn', 'ksh', 'pan_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld_Latn', 'ces', 'egl', 'vec', 'max_Latn', 'pes_Latn', 'ltg', 'nds'}
- tgt_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos_Latn', 'lad_Latn', 'lat_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm_Latn', 'srd', 'gcf_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur_Latn', 'arg', 'pes_Thaa', 'sqi', 'csb_Latn', 'fra', 'hat', 'non_Latn', 'sco', 'pnb', 'roh', 'bul_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw_Latn', 'hsb', 'tly_Latn', 'bul', 'bel', 'got_Goth', 'lat_Grek', 'ext', 'gla', 'mai', 'sin', 'hif_Latn', 'eng', 'bre', 'nob_Hebr', 'prg_Latn', 'ang_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr_Arab', 'san_Deva', 'gos', 'rus', 'fao', 'orv_Cyrl', 'bel_Latn', 'cos', 'zza', 'grc_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk_Cyrl', 'hye_Latn', 'pdc', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp_Latn', 'zlm_Latn', 'ind', 'rom', 'hye', 'scn', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus_Latn', 'jdt_Cyrl', 'gsw', 'glv', 'nld', 'snd_Arab', 'kur_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm_Latn', 'ksh', 'pan_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld_Latn', 'ces', 'egl', 'vec', 'max_Latn', 'pes_Latn', 'ltg', 'nds'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ine-ine/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ine-ine/opus-2020-07-27.test.txt
- src_alpha3: ine
- tgt_alpha3: ine
- short_pair: ine-ine
- chrF2_score: 0.509
- bleu: 30.8
- brevity_penalty: 0.989
- ref_len: 69953.0
- src_name: Indo-European languages
- tgt_name: Indo-European languages
- train_date: 2020-07-27
- src_alpha2: ine
- tgt_alpha2: ine
- prefer_old: False
- long_pair: ine-ine
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["ca", "es", "os", "ro", "fy", "cy", "sc", "is", "yi", "lb", "an", "sq", "fr", "ht", "rm", "ps", "af", "uk", "sl", "lt", "bg", "be", "gd", "si", "en", "br", "mk", "or", "mr", "ru", "fo", "co", "oc", "pl", "gl", "nb", "bn", "id", "hy", "da", "gv", "nl", "pt", "hi", "as", "kw", "ga", "sv", "gu", "wa", "lv", "el", "it", "hr", "ur", "nn", "de", "cs", "ine"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-ine-ine | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ca",
"es",
"os",
"ro",
"fy",
"cy",
"sc",
"is",
"yi",
"lb",
"an",
"sq",
"fr",
"ht",
"rm",
"ps",
"af",
"uk",
"sl",
"lt",
"bg",
"be",
"gd",
"si",
"en",
"br",
"mk",
"or",
"mr",
"ru",
"fo",
"co",
"oc",
"pl",
"gl",
"nb",
"bn",
"id",
"hy",
"da",
"gv",
"nl",
"pt",
"hi",
"as",
"kw",
"ga",
"sv",
"gu",
"wa",
"lv",
"el",
"it",
"hr",
"ur",
"nn",
"de",
"cs",
"ine",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ca",
"es",
"os",
"ro",
"fy",
"cy",
"sc",
"is",
"yi",
"lb",
"an",
"sq",
"fr",
"ht",
"rm",
"ps",
"af",
"uk",
"sl",
"lt",
"bg",
"be",
"gd",
"si",
"en",
"br",
"mk",
"or",
"mr",
"ru",
"fo",
"co",
"oc",
"pl",
"gl",
"nb",
"bn",
"id",
"hy",
"da",
"gv",
"nl",
"pt",
"hi",
"as",
"kw",
"ga",
"sv",
"gu",
"wa",
"lv",
"el",
"it",
"hr",
"ur",
"nn",
"de",
"cs",
"ine"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #ca #es #os #ro #fy #cy #sc #is #yi #lb #an #sq #fr #ht #rm #ps #af #uk #sl #lt #bg #be #gd #si #en #br #mk #or #mr #ru #fo #co #oc #pl #gl #nb #bn #id #hy #da #gv #nl #pt #hi #as #kw #ga #sv #gu #wa #lv #el #it #hr #ur #nn #de #cs #ine #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ine-ine
* source group: Indo-European languages
* target group: Indo-European languages
* OPUS readme: ine-ine
* source language(s): afr afr\_Arab aln ang\_Latn arg asm ast awa bel bel\_Latn ben bho bjn bos\_Latn bre bul bul\_Latn cat ces cor cos csb\_Latn cym dan deu dsb egl ell eng enm\_Latn ext fao fra frm\_Latn frr fry gcf\_Latn gla gle glg glv gom gos got\_Goth grc\_Grek gsw guj hat hif\_Latn hin hrv hsb hye hye\_Latn ind isl ita jdt\_Cyrl ksh kur\_Arab kur\_Latn lad lad\_Latn lat\_Grek lat\_Latn lav lij lit lld\_Latn lmo ltg ltz mai mar max\_Latn mfe min mkd mwl nds nld nno nob nob\_Hebr non\_Latn npi oci ori orv\_Cyrl oss pan\_Guru pap pcd pdc pes pes\_Latn pes\_Thaa pms pnb pol por prg\_Latn pus roh rom ron rue rus rus\_Latn san\_Deva scn sco sgs sin slv snd\_Arab spa sqi srd srp\_Cyrl srp\_Latn stq swe swg tgk\_Cyrl tly\_Latn tmw\_Latn ukr urd vec wln yid zlm\_Latn zsm\_Latn zza
* target language(s): afr afr\_Arab aln ang\_Latn arg asm ast awa bel bel\_Latn ben bho bjn bos\_Latn bre bul bul\_Latn cat ces cor cos csb\_Latn cym dan deu dsb egl ell eng enm\_Latn ext fao fra frm\_Latn frr fry gcf\_Latn gla gle glg glv gom gos got\_Goth grc\_Grek gsw guj hat hif\_Latn hin hrv hsb hye hye\_Latn ind isl ita jdt\_Cyrl ksh kur\_Arab kur\_Latn lad lad\_Latn lat\_Grek lat\_Latn lav lij lit lld\_Latn lmo ltg ltz mai mar max\_Latn mfe min mkd mwl nds nld nno nob nob\_Hebr non\_Latn npi oci ori orv\_Cyrl oss pan\_Guru pap pcd pdc pes pes\_Latn pes\_Thaa pms pnb pol por prg\_Latn pus roh rom ron rue rus rus\_Latn san\_Deva scn sco sgs sin slv snd\_Arab spa sqi srd srp\_Cyrl srp\_Latn stq swe swg tgk\_Cyrl tly\_Latn tmw\_Latn ukr urd vec wln yid zlm\_Latn zsm\_Latn zza
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form '>>id<<' (id = a valid target language ID); see the usage sketch after this list
* download original weights: URL
* test set translations: URL
* test set scores: URL
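A minimal usage sketch with the Hugging Face transformers library, assuming the checkpoint is published under the model id above (Helsinki-NLP/opus-mt-ine-ine). The target language is selected by prefixing the source sentence with its '>>id<<' token, e.g. '>>spa<<' for Spanish:

```python
from transformers import MarianMTModel, MarianTokenizer

# Assumed checkpoint name, taken from the model id in this card.
model_name = "Helsinki-NLP/opus-mt-ine-ine"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The sentence-initial >>id<< token selects the target language;
# here Spanish (spa) — any valid target language ID listed above works.
src_text = [">>spa<< This is a multilingual translation model."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```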
Benchmarks
----------
testset: euelections\_dev2019.URL, BLEU: 19.2, chr-F: 0.482
testset: euelections\_dev2019.URL, BLEU: 15.8, chr-F: 0.470
testset: URL, BLEU: 4.0, chr-F: 0.245
testset: URL, BLEU: 6.8, chr-F: 0.301
testset: URL, BLEU: 17.3, chr-F: 0.470
testset: URL, BLEU: 26.0, chr-F: 0.534
testset: URL, BLEU: 12.1, chr-F: 0.416
testset: URL, BLEU: 15.9, chr-F: 0.443
testset: URL, BLEU: 2.5, chr-F: 0.200
testset: URL, BLEU: 7.1, chr-F: 0.302
testset: URL, BLEU: 10.6, chr-F: 0.407
testset: URL, BLEU: 14.9, chr-F: 0.428
testset: URL, BLEU: 22.6, chr-F: 0.507
testset: URL, BLEU: 23.5, chr-F: 0.495
testset: URL, BLEU: 25.1, chr-F: 0.528
testset: URL, BLEU: 26.4, chr-F: 0.517
testset: URL, BLEU: 13.1, chr-F: 0.432
testset: URL, BLEU: 18.4, chr-F: 0.463
testset: URL, BLEU: 15.5, chr-F: 0.452
testset: URL, BLEU: 14.8, chr-F: 0.458
testset: URL, BLEU: 18.4, chr-F: 0.462
testset: URL, BLEU: 10.5, chr-F: 0.381
testset: URL, BLEU: 19.5, chr-F: 0.467
testset: URL, BLEU: 16.4, chr-F: 0.459
testset: URL, BLEU: 15.5, chr-F: 0.456
testset: URL, BLEU: 18.4, chr-F: 0.466
testset: URL, BLEU: 11.9, chr-F: 0.394
testset: URL, BLEU: 13.9, chr-F: 0.446
testset: URL, BLEU: 20.7, chr-F: 0.502
testset: URL, BLEU: 21.3, chr-F: 0.516
testset: URL, BLEU: 22.3, chr-F: 0.506
testset: URL, BLEU: 11.5, chr-F: 0.390
testset: URL, BLEU: 13.4, chr-F: 0.437
testset: URL, BLEU: 22.8, chr-F: 0.499
testset: URL, BLEU: 22.2, chr-F: 0.533
testset: URL, BLEU: 26.2, chr-F: 0.539
testset: URL, BLEU: 12.3, chr-F: 0.397
testset: URL, BLEU: 13.3, chr-F: 0.436
testset: URL, BLEU: 24.7, chr-F: 0.517
testset: URL, BLEU: 24.0, chr-F: 0.528
testset: URL, BLEU: 26.3, chr-F: 0.537
testset: URL, BLEU: 12.0, chr-F: 0.400
testset: URL, BLEU: 13.9, chr-F: 0.440
testset: URL, BLEU: 22.9, chr-F: 0.509
testset: URL, BLEU: 24.2, chr-F: 0.538
testset: URL, BLEU: 24.5, chr-F: 0.547
testset: URL, BLEU: 12.0, chr-F: 0.422
testset: URL, BLEU: 15.1, chr-F: 0.444
testset: URL, BLEU: 16.4, chr-F: 0.451
testset: URL, BLEU: 9.9, chr-F: 0.369
testset: URL, BLEU: 18.0, chr-F: 0.456
testset: URL, BLEU: 16.4, chr-F: 0.453
testset: URL, BLEU: 17.0, chr-F: 0.452
testset: URL, BLEU: 10.5, chr-F: 0.375
testset: URL, BLEU: 14.5, chr-F: 0.439
testset: URL, BLEU: 18.9, chr-F: 0.481
testset: URL, BLEU: 20.9, chr-F: 0.491
testset: URL, BLEU: 10.7, chr-F: 0.380
testset: URL, BLEU: 13.8, chr-F: 0.435
testset: URL, BLEU: 19.8, chr-F: 0.479
testset: URL, BLEU: 24.8, chr-F: 0.522
testset: URL, BLEU: 11.0, chr-F: 0.380
testset: URL, BLEU: 14.0, chr-F: 0.433
testset: URL, BLEU: 20.6, chr-F: 0.488
testset: URL, BLEU: 23.3, chr-F: 0.518
testset: URL, BLEU: 12.9, chr-F: 0.427
testset: URL, BLEU: 17.0, chr-F: 0.456
testset: URL, BLEU: 15.4, chr-F: 0.447
testset: URL, BLEU: 14.9, chr-F: 0.454
testset: URL, BLEU: 17.1, chr-F: 0.458
testset: URL, BLEU: 10.3, chr-F: 0.370
testset: URL, BLEU: 17.7, chr-F: 0.458
testset: URL, BLEU: 15.9, chr-F: 0.447
testset: URL, BLEU: 14.7, chr-F: 0.446
testset: URL, BLEU: 17.2, chr-F: 0.453
testset: URL, BLEU: 11.0, chr-F: 0.387
testset: URL, BLEU: 13.6, chr-F: 0.440
testset: URL, BLEU: 20.3, chr-F: 0.496
testset: URL, BLEU: 20.8, chr-F: 0.509
testset: URL, BLEU: 21.9, chr-F: 0.503
testset: URL, BLEU: 11.3, chr-F: 0.385
testset: URL, BLEU: 14.0, chr-F: 0.436
testset: URL, BLEU: 21.8, chr-F: 0.496
testset: URL, BLEU: 22.1, chr-F: 0.526
testset: URL, BLEU: 24.8, chr-F: 0.525
testset: URL, BLEU: 11.5, chr-F: 0.382
testset: URL, BLEU: 13.3, chr-F: 0.430
testset: URL, BLEU: 23.6, chr-F: 0.508
testset: URL, BLEU: 22.9, chr-F: 0.516
testset: URL, BLEU: 25.4, chr-F: 0.529
testset: URL, BLEU: 11.3, chr-F: 0.386
testset: URL, BLEU: 13.5, chr-F: 0.434
testset: URL, BLEU: 22.4, chr-F: 0.500
testset: URL, BLEU: 23.2, chr-F: 0.520
testset: URL, BLEU: 24.0, chr-F: 0.538
testset: URL, BLEU: 13.1, chr-F: 0.431
testset: URL, BLEU: 16.9, chr-F: 0.459
testset: URL, BLEU: 15.6, chr-F: 0.450
testset: URL, BLEU: 18.5, chr-F: 0.467
testset: URL, BLEU: 11.4, chr-F: 0.387
testset: URL, BLEU: 19.6, chr-F: 0.481
testset: URL, BLEU: 17.7, chr-F: 0.471
testset: URL, BLEU: 20.0, chr-F: 0.478
testset: URL, BLEU: 11.4, chr-F: 0.393
testset: URL, BLEU: 15.1, chr-F: 0.448
testset: URL, BLEU: 21.4, chr-F: 0.506
testset: URL, BLEU: 25.0, chr-F: 0.525
testset: URL, BLEU: 11.1, chr-F: 0.386
testset: URL, BLEU: 14.2, chr-F: 0.442
testset: URL, BLEU: 22.6, chr-F: 0.507
testset: URL, BLEU: 26.6, chr-F: 0.542
testset: URL, BLEU: 12.2, chr-F: 0.396
testset: URL, BLEU: 15.1, chr-F: 0.445
testset: URL, BLEU: 24.3, chr-F: 0.521
testset: URL, BLEU: 24.8, chr-F: 0.536
testset: URL, BLEU: 13.1, chr-F: 0.423
testset: URL, BLEU: 18.2, chr-F: 0.463
testset: URL, BLEU: 17.4, chr-F: 0.458
testset: URL, BLEU: 18.9, chr-F: 0.464
testset: URL, BLEU: 11.2, chr-F: 0.376
testset: URL, BLEU: 18.3, chr-F: 0.464
testset: URL, BLEU: 17.0, chr-F: 0.457
testset: URL, BLEU: 19.2, chr-F: 0.464
testset: URL, BLEU: 12.4, chr-F: 0.395
testset: URL, BLEU: 14.5, chr-F: 0.437
testset: URL, BLEU: 23.6, chr-F: 0.522
testset: URL, BLEU: 26.6, chr-F: 0.530
testset: URL, BLEU: 12.5, chr-F: 0.394
testset: URL, BLEU: 14.2, chr-F: 0.433
testset: URL, BLEU: 24.3, chr-F: 0.521
testset: URL, BLEU: 29.1, chr-F: 0.551
testset: URL, BLEU: 12.3, chr-F: 0.390
testset: URL, BLEU: 14.4, chr-F: 0.435
testset: URL, BLEU: 25.0, chr-F: 0.521
testset: URL, BLEU: 25.6, chr-F: 0.537
testset: URL, BLEU: 13.1, chr-F: 0.420
testset: URL, BLEU: 17.5, chr-F: 0.457
testset: URL, BLEU: 16.8, chr-F: 0.452
testset: URL, BLEU: 11.2, chr-F: 0.379
testset: URL, BLEU: 18.1, chr-F: 0.457
testset: URL, BLEU: 11.2, chr-F: 0.368
testset: URL, BLEU: 19.4, chr-F: 0.472
testset: URL, BLEU: 17.7, chr-F: 0.464
testset: URL, BLEU: 10.3, chr-F: 0.370
testset: URL, BLEU: 19.6, chr-F: 0.467
testset: URL, BLEU: 11.1, chr-F: 0.375
testset: URL, BLEU: 14.6, chr-F: 0.440
testset: URL, BLEU: 22.4, chr-F: 0.512
testset: URL, BLEU: 17.6, chr-F: 0.452
testset: URL, BLEU: 26.5, chr-F: 0.527
testset: URL, BLEU: 11.9, chr-F: 0.383
testset: URL, BLEU: 14.6, chr-F: 0.437
testset: URL, BLEU: 24.3, chr-F: 0.516
testset: URL, BLEU: 11.9, chr-F: 0.393
testset: URL, BLEU: 28.3, chr-F: 0.545
testset: URL, BLEU: 9.0, chr-F: 0.340
testset: URL, BLEU: 10.0, chr-F: 0.383
testset: URL, BLEU: 22.4, chr-F: 0.492
testset: URL, BLEU: 13.3, chr-F: 0.427
testset: URL, BLEU: 16.6, chr-F: 0.437
testset: URL, BLEU: 11.9, chr-F: 0.381
testset: URL, BLEU: 14.8, chr-F: 0.440
testset: URL, BLEU: 26.5, chr-F: 0.534
testset: URL, BLEU: 25.0, chr-F: 0.539
testset: URL, BLEU: 12.4, chr-F: 0.401
testset: URL, BLEU: 14.3, chr-F: 0.434
testset: URL, BLEU: 18.5, chr-F: 0.463
testset: URL, BLEU: 16.6, chr-F: 0.444
testset: URL, BLEU: 13.6, chr-F: 0.406
testset: URL, BLEU: 18.2, chr-F: 0.455
testset: URL, BLEU: 11.7, chr-F: 0.380
testset: URL, BLEU: 20.9, chr-F: 0.481
testset: URL, BLEU: 18.1, chr-F: 0.460
testset: URL, BLEU: 11.7, chr-F: 0.384
testset: URL, BLEU: 19.4, chr-F: 0.463
testset: URL, BLEU: 12.7, chr-F: 0.394
testset: URL, BLEU: 16.7, chr-F: 0.455
testset: URL, BLEU: 22.7, chr-F: 0.499
testset: URL, BLEU: 13.3, chr-F: 0.408
testset: URL, BLEU: 23.6, chr-F: 0.506
testset: URL, BLEU: 11.8, chr-F: 0.379
testset: URL, BLEU: 15.6, chr-F: 0.446
testset: URL, BLEU: 23.6, chr-F: 0.506
testset: URL, BLEU: 12.9, chr-F: 0.399
testset: URL, BLEU: 25.3, chr-F: 0.519
testset: URL, BLEU: 11.6, chr-F: 0.376
testset: URL, BLEU: 12.4, chr-F: 0.410
testset: URL, BLEU: 17.8, chr-F: 0.448
testset: URL, BLEU: 14.8, chr-F: 0.434
testset: URL, BLEU: 17.9, chr-F: 0.446
testset: URL, BLEU: 12.5, chr-F: 0.391
testset: URL, BLEU: 15.9, chr-F: 0.449
testset: URL, BLEU: 24.0, chr-F: 0.518
testset: URL, BLEU: 24.3, chr-F: 0.522
testset: URL, BLEU: 13.9, chr-F: 0.411
testset: URL, BLEU: 19.0, chr-F: 0.475
testset: URL, BLEU: 19.2, chr-F: 0.468
testset: URL, BLEU: 23.9, chr-F: 0.521
testset: URL, BLEU: 5.9, chr-F: 0.268
testset: URL, BLEU: 8.8, chr-F: 0.348
testset: URL, BLEU: 19.1, chr-F: 0.475
testset: URL, BLEU: 17.9, chr-F: 0.450
testset: URL, BLEU: 12.1, chr-F: 0.392
testset: URL, BLEU: 21.1, chr-F: 0.480
testset: URL, BLEU: 18.7, chr-F: 0.475
testset: URL, BLEU: 15.4, chr-F: 0.431
testset: URL, BLEU: 18.1, chr-F: 0.454
testset: URL, BLEU: 18.6, chr-F: 0.465
testset: URL, BLEU: 13.3, chr-F: 0.403
testset: URL, BLEU: 24.0, chr-F: 0.508
testset: URL, BLEU: 21.4, chr-F: 0.494
testset: URL, BLEU: 16.8, chr-F: 0.457
testset: URL, BLEU: 24.9, chr-F: 0.522
testset: URL, BLEU: 13.7, chr-F: 0.417
testset: URL, BLEU: 17.3, chr-F: 0.453
testset: URL, BLEU: 16.7, chr-F: 0.444
testset: URL, BLEU: 10.9, chr-F: 0.375
testset: URL, BLEU: 21.5, chr-F: 0.484
testset: URL, BLEU: 17.5, chr-F: 0.464
testset: URL, BLEU: 9.1, chr-F: 0.388
testset: URL, BLEU: 11.5, chr-F: 0.404
testset: URL, BLEU: 14.8, chr-F: 0.432
testset: URL, BLEU: 19.3, chr-F: 0.467
testset: URL, BLEU: 17.1, chr-F: 0.450
testset: URL, BLEU: 10.9, chr-F: 0.380
testset: URL, BLEU: 26.0, chr-F: 0.518
testset: URL, BLEU: 24.3, chr-F: 0.514
testset: URL, BLEU: 12.5, chr-F: 0.417
testset: URL, BLEU: 16.4, chr-F: 0.443
testset: URL, BLEU: 13.9, chr-F: 0.432
testset: URL, BLEU: 11.7, chr-F: 0.383
testset: URL, BLEU: 22.2, chr-F: 0.483
testset: URL, BLEU: 20.1, chr-F: 0.496
testset: URL, BLEU: 12.3, chr-F: 0.389
testset: URL, BLEU: 22.0, chr-F: 0.497
testset: URL, BLEU: 3.1, chr-F: 0.208
testset: URL, BLEU: 7.8, chr-F: 0.369
testset: URL, BLEU: 14.6, chr-F: 0.408
testset: URL, BLEU: 16.4, chr-F: 0.483
testset: URL, BLEU: 6.1, chr-F: 0.288
testset: URL, BLEU: 16.9, chr-F: 0.456
testset: URL, BLEU: 20.2, chr-F: 0.468
testset: URL, BLEU: 16.0, chr-F: 0.152
testset: URL, BLEU: 10.2, chr-F: 0.333
testset: URL, BLEU: 32.6, chr-F: 0.651
testset: URL, BLEU: 34.5, chr-F: 0.556
testset: URL, BLEU: 48.1, chr-F: 0.638
testset: URL, BLEU: 10.2, chr-F: 0.416
testset: URL, BLEU: 41.9, chr-F: 0.612
testset: URL, BLEU: 0.0, chr-F: 0.112
testset: URL, BLEU: 0.3, chr-F: 0.068
testset: URL, BLEU: 12.2, chr-F: 0.419
testset: URL, BLEU: 48.7, chr-F: 0.637
testset: URL, BLEU: 8.4, chr-F: 0.407
testset: URL, BLEU: 19.0, chr-F: 0.357
testset: URL, BLEU: 0.0, chr-F: 0.238
testset: URL, BLEU: 1.4, chr-F: 0.080
testset: URL, BLEU: 45.7, chr-F: 0.643
testset: URL, BLEU: 55.3, chr-F: 0.687
testset: URL, BLEU: 39.3, chr-F: 0.563
testset: URL, BLEU: 33.9, chr-F: 0.586
testset: URL, BLEU: 22.6, chr-F: 0.475
testset: URL, BLEU: 32.1, chr-F: 0.525
testset: URL, BLEU: 44.1, chr-F: 0.611
testset: URL, BLEU: 71.6, chr-F: 0.814
testset: URL, BLEU: 31.0, chr-F: 0.481
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 0.0, chr-F: 0.133
testset: URL, BLEU: 5.5, chr-F: 0.129
testset: URL, BLEU: 22.2, chr-F: 0.345
testset: URL, BLEU: 6.3, chr-F: 0.251
testset: URL, BLEU: 7.9, chr-F: 0.255
testset: URL, BLEU: 0.8, chr-F: 0.133
testset: URL, BLEU: 16.0, chr-F: 0.086
testset: URL, BLEU: 6.0, chr-F: 0.185
testset: URL, BLEU: 0.6, chr-F: 0.000
testset: URL, BLEU: 16.0, chr-F: 0.102
testset: URL, BLEU: 13.2, chr-F: 0.301
testset: URL, BLEU: 7.6, chr-F: 0.062
testset: URL, BLEU: 0.2, chr-F: 0.025
testset: URL, BLEU: 6.6, chr-F: 0.198
testset: URL, BLEU: 5.5, chr-F: 0.121
testset: URL, BLEU: 11.4, chr-F: 0.498
testset: URL, BLEU: 2.4, chr-F: 0.103
testset: URL, BLEU: 8.1, chr-F: 0.249
testset: URL, BLEU: 16.4, chr-F: 0.195
testset: URL, BLEU: 1.1, chr-F: 0.117
testset: URL, BLEU: 28.2, chr-F: 0.394
testset: URL, BLEU: 39.8, chr-F: 0.445
testset: URL, BLEU: 52.3, chr-F: 0.608
testset: URL, BLEU: 8.6, chr-F: 0.261
testset: URL, BLEU: 19.2, chr-F: 0.629
testset: URL, BLEU: 18.2, chr-F: 0.369
testset: URL, BLEU: 4.3, chr-F: 0.145
testset: URL, BLEU: 4.5, chr-F: 0.366
testset: URL, BLEU: 12.1, chr-F: 0.310
testset: URL, BLEU: 8.1, chr-F: 0.050
testset: URL, BLEU: 30.1, chr-F: 0.463
testset: URL, BLEU: 27.6, chr-F: 0.441
testset: URL, BLEU: 29.4, chr-F: 0.501
testset: URL, BLEU: 2.6, chr-F: 0.030
testset: URL, BLEU: 10.0, chr-F: 0.280
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 35.9, chr-F: 0.682
testset: URL, BLEU: 41.7, chr-F: 0.601
testset: URL, BLEU: 2.4, chr-F: 0.201
testset: URL, BLEU: 53.7, chr-F: 0.808
testset: URL, BLEU: 27.6, chr-F: 0.483
testset: URL, BLEU: 32.6, chr-F: 0.449
testset: URL, BLEU: 29.1, chr-F: 0.506
testset: URL, BLEU: 29.5, chr-F: 0.522
testset: URL, BLEU: 31.8, chr-F: 0.512
testset: URL, BLEU: 30.9, chr-F: 0.527
testset: URL, BLEU: 39.3, chr-F: 0.608
testset: URL, BLEU: 32.8, chr-F: 0.540
testset: URL, BLEU: 12.7, chr-F: 0.178
testset: URL, BLEU: 4.5, chr-F: 0.185
testset: URL, BLEU: 3.7, chr-F: 0.251
testset: URL, BLEU: 19.3, chr-F: 0.531
testset: URL, BLEU: 1.0, chr-F: 0.147
testset: URL, BLEU: 27.1, chr-F: 0.481
testset: URL, BLEU: 37.0, chr-F: 0.494
testset: URL, BLEU: 34.8, chr-F: 0.565
testset: URL, BLEU: 21.7, chr-F: 0.401
testset: URL, BLEU: 42.3, chr-F: 0.643
testset: URL, BLEU: 28.2, chr-F: 0.534
testset: URL, BLEU: 41.6, chr-F: 0.643
testset: URL, BLEU: 2.9, chr-F: 0.254
testset: URL, BLEU: 34.6, chr-F: 0.408
testset: URL, BLEU: 26.5, chr-F: 0.430
testset: URL, BLEU: 21.6, chr-F: 0.466
testset: URL, BLEU: 26.8, chr-F: 0.424
testset: URL, BLEU: 28.9, chr-F: 0.473
testset: URL, BLEU: 21.0, chr-F: 0.384
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 2.2, chr-F: 0.178
testset: URL, BLEU: 7.7, chr-F: 0.296
testset: URL, BLEU: 13.6, chr-F: 0.309
testset: URL, BLEU: 8.6, chr-F: 0.251
testset: URL, BLEU: 12.2, chr-F: 0.272
testset: URL, BLEU: 0.9, chr-F: 0.081
testset: URL, BLEU: 3.0, chr-F: 0.217
testset: URL, BLEU: 1.4, chr-F: 0.158
testset: URL, BLEU: 14.1, chr-F: 0.582
testset: URL, BLEU: 52.8, chr-F: 0.725
testset: URL, BLEU: 66.9, chr-F: 0.951
testset: URL, BLEU: 31.2, chr-F: 0.530
testset: URL, BLEU: 29.1, chr-F: 0.497
testset: URL, BLEU: 36.5, chr-F: 0.547
testset: URL, BLEU: 5.3, chr-F: 0.299
testset: URL, BLEU: 8.9, chr-F: 0.511
testset: URL, BLEU: 36.1, chr-F: 0.558
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 24.5, chr-F: 0.479
testset: URL, BLEU: 8.1, chr-F: 0.302
testset: URL, BLEU: 13.4, chr-F: 0.337
testset: URL, BLEU: 38.2, chr-F: 0.811
testset: URL, BLEU: 15.0, chr-F: 0.431
testset: URL, BLEU: 31.8, chr-F: 0.505
testset: URL, BLEU: 66.9, chr-F: 0.951
testset: URL, BLEU: 24.4, chr-F: 0.461
testset: URL, BLEU: 29.2, chr-F: 0.484
testset: URL, BLEU: 42.7, chr-F: 0.776
testset: URL, BLEU: 28.7, chr-F: 0.522
testset: URL, BLEU: 32.1, chr-F: 0.520
testset: URL, BLEU: 66.9, chr-F: 0.611
testset: URL, BLEU: 34.3, chr-F: 0.567
testset: URL, BLEU: 13.7, chr-F: 0.163
testset: URL, BLEU: 31.0, chr-F: 0.523
testset: URL, BLEU: 17.0, chr-F: 0.423
testset: URL, BLEU: 39.4, chr-F: 0.582
testset: URL, BLEU: 5.3, chr-F: 0.370
testset: URL, BLEU: 16.0, chr-F: 0.301
testset: URL, BLEU: 41.0, chr-F: 0.606
testset: URL, BLEU: 39.8, chr-F: 0.626
testset: URL, BLEU: 35.9, chr-F: 0.555
testset: URL, BLEU: 23.0, chr-F: 0.456
testset: URL, BLEU: 38.9, chr-F: 0.618
testset: URL, BLEU: 16.0, chr-F: 0.311
testset: URL, BLEU: 28.8, chr-F: 0.507
testset: URL, BLEU: 55.2, chr-F: 0.731
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 30.8, chr-F: 0.512
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 17.0, chr-F: 0.426
testset: URL, BLEU: 3.3, chr-F: 0.165
testset: URL, BLEU: 23.3, chr-F: 0.466
testset: URL, BLEU: 0.7, chr-F: 0.126
testset: URL, BLEU: 45.2, chr-F: 0.690
testset: URL, BLEU: 3.4, chr-F: 0.072
testset: URL, BLEU: 12.7, chr-F: 0.706
testset: URL, BLEU: 32.2, chr-F: 0.526
testset: URL, BLEU: 24.4, chr-F: 0.422
testset: URL, BLEU: 33.8, chr-F: 0.529
testset: URL, BLEU: 1.7, chr-F: 0.157
testset: URL, BLEU: 3.7, chr-F: 0.252
testset: URL, BLEU: 20.1, chr-F: 0.229
testset: URL, BLEU: 36.9, chr-F: 0.564
testset: URL, BLEU: 7.7, chr-F: 0.338
testset: URL, BLEU: 0.6, chr-F: 0.011
testset: URL, BLEU: 39.7, chr-F: 0.580
testset: URL, BLEU: 7.0, chr-F: 0.230
testset: URL, BLEU: 28.2, chr-F: 0.516
testset: URL, BLEU: 1.7, chr-F: 0.303
testset: URL, BLEU: 6.5, chr-F: 0.304
testset: URL, BLEU: 6.6, chr-F: 0.202
testset: URL, BLEU: 31.4, chr-F: 0.586
testset: URL, BLEU: 6.4, chr-F: 0.312
testset: URL, BLEU: 19.9, chr-F: 0.468
testset: URL, BLEU: 35.1, chr-F: 0.535
testset: URL, BLEU: 41.7, chr-F: 0.610
testset: URL, BLEU: 30.5, chr-F: 0.530
testset: URL, BLEU: 33.0, chr-F: 0.533
testset: URL, BLEU: 9.9, chr-F: 0.406
testset: URL, BLEU: 36.9, chr-F: 0.564
testset: URL, BLEU: 4.1, chr-F: 0.236
testset: URL, BLEU: 33.3, chr-F: 0.531
testset: URL, BLEU: 51.4, chr-F: 0.586
testset: URL, BLEU: 4.8, chr-F: 0.118
testset: URL, BLEU: 34.6, chr-F: 0.522
testset: URL, BLEU: 2.1, chr-F: 0.252
testset: URL, BLEU: 8.9, chr-F: 0.233
testset: URL, BLEU: 6.7, chr-F: 0.205
testset: URL, BLEU: 4.8, chr-F: 0.211
testset: URL, BLEU: 3.4, chr-F: 0.182
testset: URL, BLEU: 4.4, chr-F: 0.193
testset: URL, BLEU: 5.0, chr-F: 0.221
testset: URL, BLEU: 6.6, chr-F: 0.211
testset: URL, BLEU: 9.3, chr-F: 0.221
testset: URL, BLEU: 19.6, chr-F: 0.282
testset: URL, BLEU: 2.9, chr-F: 0.171
testset: URL, BLEU: 4.3, chr-F: 0.187
testset: URL, BLEU: 2.4, chr-F: 0.154
testset: URL, BLEU: 3.6, chr-F: 0.187
testset: URL, BLEU: 0.0, chr-F: 0.877
testset: URL, BLEU: 39.2, chr-F: 0.473
testset: URL, BLEU: 19.0, chr-F: 0.352
testset: URL, BLEU: 1.6, chr-F: 0.066
testset: URL, BLEU: 17.5, chr-F: 0.336
testset: URL, BLEU: 14.0, chr-F: 0.347
testset: URL, BLEU: 3.8, chr-F: 0.278
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 0.0, chr-F: 0.014
testset: URL, BLEU: 32.6, chr-F: 0.507
testset: URL, BLEU: 33.1, chr-F: 0.496
testset: URL, BLEU: 27.0, chr-F: 0.447
testset: URL, BLEU: 5.7, chr-F: 0.223
testset: URL, BLEU: 13.1, chr-F: 0.380
testset: URL, BLEU: 5.3, chr-F: 0.186
testset: URL, BLEU: 28.3, chr-F: 0.498
testset: URL, BLEU: 3.7, chr-F: 0.185
testset: URL, BLEU: 8.0, chr-F: 0.067
testset: URL, BLEU: 37.5, chr-F: 0.603
testset: URL, BLEU: 37.8, chr-F: 0.488
testset: URL, BLEU: 32.1, chr-F: 0.480
testset: URL, BLEU: 31.6, chr-F: 0.523
testset: URL, BLEU: 4.8, chr-F: 0.072
testset: URL, BLEU: 40.5, chr-F: 0.774
testset: URL, BLEU: 1.2, chr-F: 0.066
testset: URL, BLEU: 13.1, chr-F: 0.156
testset: URL, BLEU: 27.2, chr-F: 0.746
testset: URL, BLEU: 35.4, chr-F: 0.529
testset: URL, BLEU: 19.0, chr-F: 0.349
testset: URL, BLEU: 35.8, chr-F: 0.582
testset: URL, BLEU: 19.0, chr-F: 0.337
testset: URL, BLEU: 43.4, chr-F: 0.609
testset: URL, BLEU: 18.1, chr-F: 0.515
testset: URL, BLEU: 9.7, chr-F: 0.162
testset: URL, BLEU: 14.1, chr-F: 0.410
testset: URL, BLEU: 47.0, chr-F: 0.640
testset: URL, BLEU: 2.6, chr-F: 0.195
testset: URL, BLEU: 12.2, chr-F: 0.344
testset: URL, BLEU: 36.3, chr-F: 0.589
testset: URL, BLEU: 3.5, chr-F: 0.270
testset: URL, BLEU: 0.4, chr-F: 0.096
testset: URL, BLEU: 3.9, chr-F: 0.376
testset: URL, BLEU: 68.7, chr-F: 0.786
testset: URL, BLEU: 71.4, chr-F: 0.554
testset: URL, BLEU: 3.7, chr-F: 0.220
testset: URL, BLEU: 4.9, chr-F: 0.219
testset: URL, BLEU: 47.2, chr-F: 0.650
testset: URL, BLEU: 58.8, chr-F: 0.749
testset: URL, BLEU: 27.1, chr-F: 0.527
testset: URL, BLEU: 41.5, chr-F: 0.616
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 30.8, chr-F: 0.518
testset: URL, BLEU: 36.6, chr-F: 0.578
testset: URL, BLEU: 53.8, chr-F: 0.696
testset: URL, BLEU: 4.8, chr-F: 0.184
testset: URL, BLEU: 15.9, chr-F: 0.489
testset: URL, BLEU: 21.7, chr-F: 0.544
testset: URL, BLEU: 13.0, chr-F: 0.252
testset: URL, BLEU: 37.5, chr-F: 0.566
testset: URL, BLEU: 0.6, chr-F: 0.131
testset: URL, BLEU: 20.0, chr-F: 0.580
testset: URL, BLEU: 16.5, chr-F: 0.389
testset: URL, BLEU: 19.6, chr-F: 0.450
testset: URL, BLEU: 34.5, chr-F: 0.319
testset: URL, BLEU: 3.2, chr-F: 0.196
testset: URL, BLEU: 32.6, chr-F: 0.517
testset: URL, BLEU: 28.4, chr-F: 0.503
testset: URL, BLEU: 24.3, chr-F: 0.465
testset: URL, BLEU: 0.2, chr-F: 0.043
testset: URL, BLEU: 2.4, chr-F: 0.020
testset: URL, BLEU: 4.4, chr-F: 0.178
testset: URL, BLEU: 11.3, chr-F: 0.378
testset: URL, BLEU: 37.8, chr-F: 0.579
testset: URL, BLEU: 0.1, chr-F: 0.082
testset: URL, BLEU: 3.3, chr-F: 0.050
testset: URL, BLEU: 27.1, chr-F: 0.485
testset: URL, BLEU: 34.7, chr-F: 0.539
testset: URL, BLEU: 6.7, chr-F: 0.331
testset: URL, BLEU: 4.5, chr-F: 0.235
testset: URL, BLEU: 31.9, chr-F: 0.527
testset: URL, BLEU: 0.2, chr-F: 0.101
testset: URL, BLEU: 13.7, chr-F: 0.358
testset: URL, BLEU: 7.2, chr-F: 0.304
testset: URL, BLEU: 8.9, chr-F: 0.349
testset: URL, BLEU: 28.9, chr-F: 0.513
testset: URL, BLEU: 0.7, chr-F: 0.157
testset: URL, BLEU: 0.2, chr-F: 0.010
testset: URL, BLEU: 0.1, chr-F: 0.005
testset: URL, BLEU: 0.2, chr-F: 0.073
testset: URL, BLEU: 23.2, chr-F: 0.470
testset: URL, BLEU: 12.5, chr-F: 0.367
testset: URL, BLEU: 5.4, chr-F: 0.249
testset: URL, BLEU: 12.9, chr-F: 0.263
testset: URL, BLEU: 16.5, chr-F: 0.395
testset: URL, BLEU: 29.2, chr-F: 0.536
testset: URL, BLEU: 0.6, chr-F: 0.092
testset: URL, BLEU: 11.2, chr-F: 0.183
testset: URL, BLEU: 0.3, chr-F: 0.112
testset: URL, BLEU: 6.4, chr-F: 0.301
testset: URL, BLEU: 29.6, chr-F: 0.502
testset: URL, BLEU: 17.4, chr-F: 0.445
testset: URL, BLEU: 18.5, chr-F: 0.380
testset: URL, BLEU: 7.9, chr-F: 0.245
testset: URL, BLEU: 21.9, chr-F: 0.449
testset: URL, BLEU: 21.9, chr-F: 0.478
testset: URL, BLEU: 13.6, chr-F: 0.391
testset: URL, BLEU: 37.2, chr-F: 0.574
testset: URL, BLEU: 34.5, chr-F: 0.562
testset: URL, BLEU: 4.7, chr-F: 0.261
testset: URL, BLEU: 0.2, chr-F: 0.006
testset: URL, BLEU: 0.6, chr-F: 0.064
testset: URL, BLEU: 0.2, chr-F: 0.064
testset: URL, BLEU: 23.6, chr-F: 0.477
testset: URL, BLEU: 25.1, chr-F: 0.480
testset: URL, BLEU: 0.2, chr-F: 0.070
testset: URL, BLEU: 0.2, chr-F: 0.059
testset: URL, BLEU: 5.2, chr-F: 0.179
testset: URL, BLEU: 25.7, chr-F: 0.484
testset: URL, BLEU: 27.1, chr-F: 0.494
testset: URL, BLEU: 1.6, chr-F: 0.076
testset: URL, BLEU: 10.8, chr-F: 0.281
testset: URL, BLEU: 8.1, chr-F: 0.251
testset: URL, BLEU: 31.5, chr-F: 0.534
testset: URL, BLEU: 0.6, chr-F: 0.144
testset: URL, BLEU: 39.1, chr-F: 0.572
testset: URL, BLEU: 0.1, chr-F: 0.088
testset: URL, BLEU: 13.1, chr-F: 0.406
testset: URL, BLEU: 27.2, chr-F: 0.489
testset: URL, BLEU: 13.4, chr-F: 0.350
testset: URL, BLEU: 6.0, chr-F: 0.262
testset: URL, BLEU: 14.1, chr-F: 0.366
testset: URL, BLEU: 19.0, chr-F: 0.424
testset: URL, BLEU: 15.4, chr-F: 0.342
testset: URL, BLEU: 15.2, chr-F: 0.315
testset: URL, BLEU: 35.4, chr-F: 0.394
testset: URL, BLEU: 12.6, chr-F: 0.401
testset: URL, BLEU: 2.9, chr-F: 0.168
testset: URL, BLEU: 5.2, chr-F: 0.207
testset: URL, BLEU: 6.4, chr-F: 0.215
testset: URL, BLEU: 1.6, chr-F: 0.180
testset: URL, BLEU: 3.9, chr-F: 0.199
testset: URL, BLEU: 26.6, chr-F: 0.483
testset: URL, BLEU: 20.2, chr-F: 0.398
testset: URL, BLEU: 12.1, chr-F: 0.380
testset: URL, BLEU: 0.7, chr-F: 0.039
testset: URL, BLEU: 53.7, chr-F: 0.513
testset: URL, BLEU: 30.5, chr-F: 0.503
testset: URL, BLEU: 43.1, chr-F: 0.589
testset: URL, BLEU: 12.7, chr-F: 0.541
testset: URL, BLEU: 5.3, chr-F: 0.210
testset: URL, BLEU: 39.5, chr-F: 0.563
testset: URL, BLEU: 11.6, chr-F: 0.343
testset: URL, BLEU: 30.9, chr-F: 0.524
testset: URL, BLEU: 57.6, chr-F: 0.572
testset: URL, BLEU: 4.9, chr-F: 0.244
testset: URL, BLEU: 38.0, chr-F: 0.562
testset: URL, BLEU: 40.8, chr-F: 0.615
testset: URL, BLEU: 72.6, chr-F: 0.846
testset: URL, BLEU: 26.8, chr-F: 0.514
testset: URL, BLEU: 27.1, chr-F: 0.493
testset: URL, BLEU: 30.8, chr-F: 0.512
testset: URL, BLEU: 30.8, chr-F: 0.475
testset: URL, BLEU: 36.0, chr-F: 0.521
testset: URL, BLEU: 12.6, chr-F: 0.364
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 46.1, chr-F: 0.633
testset: URL, BLEU: 5.1, chr-F: 0.136
testset: URL, BLEU: 5.1, chr-F: 0.199
testset: URL, BLEU: 0.8, chr-F: 0.208
testset: URL, BLEU: 16.8, chr-F: 0.380
testset: URL, BLEU: 0.2, chr-F: 0.002
testset: URL, BLEU: 16.6, chr-F: 0.415
testset: URL, BLEU: 7.0, chr-F: 0.321
testset: URL, BLEU: 0.2, chr-F: 0.003
testset: URL, BLEU: 6.6, chr-F: 0.251
testset: URL, BLEU: 31.5, chr-F: 0.513
testset: URL, BLEU: 33.5, chr-F: 0.550
testset: URL, BLEU: 25.6, chr-F: 0.466
testset: URL, BLEU: 0.1, chr-F: 0.035
testset: URL, BLEU: 0.8, chr-F: 0.135
testset: URL, BLEU: 1.4, chr-F: 0.194
testset: URL, BLEU: 18.8, chr-F: 0.422
testset: URL, BLEU: 41.2, chr-F: 0.591
testset: URL, BLEU: 27.9, chr-F: 0.503
testset: URL, BLEU: 0.7, chr-F: 0.125
testset: URL, BLEU: 0.1, chr-F: 0.062
testset: URL, BLEU: 30.7, chr-F: 0.540
testset: URL, BLEU: 4.9, chr-F: 0.283
testset: URL, BLEU: 3.9, chr-F: 0.217
testset: URL, BLEU: 5.9, chr-F: 0.276
testset: URL, BLEU: 4.8, chr-F: 0.239
testset: URL, BLEU: 34.6, chr-F: 0.551
testset: URL, BLEU: 0.2, chr-F: 0.099
testset: URL, BLEU: 5.5, chr-F: 0.040
testset: URL, BLEU: 13.1, chr-F: 0.357
testset: URL, BLEU: 0.4, chr-F: 0.085
testset: URL, BLEU: 7.4, chr-F: 0.293
testset: URL, BLEU: 20.0, chr-F: 0.415
testset: URL, BLEU: 29.9, chr-F: 0.528
testset: URL, BLEU: 5.9, chr-F: 0.220
testset: URL, BLEU: 0.5, chr-F: 0.137
testset: URL, BLEU: 0.1, chr-F: 0.009
testset: URL, BLEU: 0.0, chr-F: 0.005
testset: URL, BLEU: 0.5, chr-F: 0.103
testset: URL, BLEU: 6.4, chr-F: 0.241
testset: URL, BLEU: 28.2, chr-F: 0.460
testset: URL, BLEU: 26.0, chr-F: 0.485
testset: URL, BLEU: 0.8, chr-F: 0.228
testset: URL, BLEU: 11.2, chr-F: 0.364
testset: URL, BLEU: 10.6, chr-F: 0.277
testset: URL, BLEU: 10.9, chr-F: 0.307
testset: URL, BLEU: 13.8, chr-F: 0.368
testset: URL, BLEU: 33.8, chr-F: 0.571
testset: URL, BLEU: 3.0, chr-F: 0.007
testset: URL, BLEU: 4.8, chr-F: 0.005
testset: URL, BLEU: 0.4, chr-F: 0.092
testset: URL, BLEU: 9.0, chr-F: 0.174
testset: URL, BLEU: 0.5, chr-F: 0.144
testset: URL, BLEU: 0.1, chr-F: 0.000
testset: URL, BLEU: 7.7, chr-F: 0.333
testset: URL, BLEU: 25.1, chr-F: 0.480
testset: URL, BLEU: 0.4, chr-F: 0.101
testset: URL, BLEU: 21.0, chr-F: 0.492
testset: URL, BLEU: 0.5, chr-F: 0.143
testset: URL, BLEU: 0.5, chr-F: 0.135
testset: URL, BLEU: 15.6, chr-F: 0.345
testset: URL, BLEU: 9.3, chr-F: 0.251
testset: URL, BLEU: 9.5, chr-F: 0.326
testset: URL, BLEU: 54.1, chr-F: 0.747
testset: URL, BLEU: 29.8, chr-F: 0.503
testset: URL, BLEU: 20.0, chr-F: 0.449
testset: URL, BLEU: 9.3, chr-F: 0.231
testset: URL, BLEU: 12.2, chr-F: 0.357
testset: URL, BLEU: 0.2, chr-F: 0.003
testset: URL, BLEU: 37.1, chr-F: 0.570
testset: URL, BLEU: 0.5, chr-F: 0.078
testset: URL, BLEU: 38.4, chr-F: 0.575
testset: URL, BLEU: 4.8, chr-F: 0.249
testset: URL, BLEU: 2.8, chr-F: 0.185
testset: URL, BLEU: 0.1, chr-F: 0.011
testset: URL, BLEU: 2.6, chr-F: 0.166
testset: URL, BLEU: 2.6, chr-F: 0.214
testset: URL, BLEU: 39.8, chr-F: 0.566
testset: URL, BLEU: 1.0, chr-F: 0.131
testset: URL, BLEU: 0.9, chr-F: 0.124
testset: URL, BLEU: 26.2, chr-F: 0.500
testset: URL, BLEU: 31.5, chr-F: 0.545
testset: URL, BLEU: 0.2, chr-F: 0.088
testset: URL, BLEU: 0.4, chr-F: 0.108
testset: URL, BLEU: 1.8, chr-F: 0.192
testset: URL, BLEU: 7.6, chr-F: 0.313
testset: URL, BLEU: 27.6, chr-F: 0.508
testset: URL, BLEU: 0.1, chr-F: 0.011
testset: URL, BLEU: 28.6, chr-F: 0.496
testset: URL, BLEU: 2.0, chr-F: 0.098
testset: URL, BLEU: 0.9, chr-F: 0.080
testset: URL, BLEU: 24.5, chr-F: 0.501
testset: URL, BLEU: 1.3, chr-F: 0.105
testset: URL, BLEU: 3.0, chr-F: 0.178
testset: URL, BLEU: 12.5, chr-F: 0.298
testset: URL, BLEU: 1.7, chr-F: 0.214
testset: URL, BLEU: 36.3, chr-F: 0.575
testset: URL, BLEU: 22.1, chr-F: 0.459
testset: URL, BLEU: 5.2, chr-F: 0.316
testset: URL, BLEU: 42.4, chr-F: 0.591
testset: URL, BLEU: 0.6, chr-F: 0.145
testset: URL, BLEU: 1.9, chr-F: 0.255
testset: URL, BLEU: 0.3, chr-F: 0.054
testset: URL, BLEU: 27.3, chr-F: 0.478
testset: URL, BLEU: 7.0, chr-F: 0.310
testset: URL, BLEU: 0.9, chr-F: 0.116
testset: URL, BLEU: 4.0, chr-F: 0.164
testset: URL, BLEU: 5.9, chr-F: 0.260
testset: URL, BLEU: 0.4, chr-F: 0.071
testset: URL, BLEU: 20.1, chr-F: 0.420
testset: URL, BLEU: 0.6, chr-F: 0.057
testset: URL, BLEU: 22.8, chr-F: 0.278
testset: URL, BLEU: 9.0, chr-F: 0.360
testset: URL, BLEU: 19.0, chr-F: 0.324
testset: URL, BLEU: 35.8, chr-F: 0.523
testset: URL, BLEU: 35.7, chr-F: 0.495
testset: URL, BLEU: 42.7, chr-F: 0.644
testset: URL, BLEU: 22.4, chr-F: 0.477
testset: URL, BLEU: 4.3, chr-F: 0.141
testset: URL, BLEU: 9.0, chr-F: 0.345
testset: URL, BLEU: 16.0, chr-F: 0.289
testset: URL, BLEU: 4.1, chr-F: 0.143
testset: URL, BLEU: 3.0, chr-F: 0.247
testset: URL, BLEU: 11.6, chr-F: 0.294
testset: URL, BLEU: 19.0, chr-F: 0.220
testset: URL, BLEU: 4.8, chr-F: 0.188
testset: URL, BLEU: 6.1, chr-F: 0.136
testset: URL, BLEU: 16.0, chr-F: 0.054
testset: URL, BLEU: 0.7, chr-F: 0.124
testset: URL, BLEU: 5.4, chr-F: 0.238
testset: URL, BLEU: 10.5, chr-F: 0.155
testset: URL, BLEU: 18.6, chr-F: 0.427
testset: URL, BLEU: 38.9, chr-F: 0.611
testset: URL, BLEU: 6.8, chr-F: 0.276
testset: URL, BLEU: 10.5, chr-F: 0.138
testset: URL, BLEU: 12.7, chr-F: 0.088
testset: URL, BLEU: 7.6, chr-F: 0.109
testset: URL, BLEU: 18.8, chr-F: 0.254
testset: URL, BLEU: 21.4, chr-F: 0.339
testset: URL, BLEU: 4.0, chr-F: 0.440
testset: URL, BLEU: 5.3, chr-F: 0.231
testset: URL, BLEU: 24.9, chr-F: 0.420
testset: URL, BLEU: 0.0, chr-F: 0.056
testset: URL, BLEU: 16.0, chr-F: 0.171
testset: URL, BLEU: 2.1, chr-F: 0.258
testset: URL, BLEU: 43.5, chr-F: 0.557
testset: URL, BLEU: 21.3, chr-F: 0.402
testset: URL, BLEU: 3.0, chr-F: 0.164
testset: URL, BLEU: 12.7, chr-F: 0.142
testset: URL, BLEU: 10.5, chr-F: 0.131
testset: URL, BLEU: 0.6, chr-F: 0.087
testset: URL, BLEU: 26.2, chr-F: 0.443
testset: URL, BLEU: 3.6, chr-F: 0.176
testset: URL, BLEU: 0.0, chr-F: 0.632
testset: URL, BLEU: 5.8, chr-F: 0.163
testset: URL, BLEU: 14.5, chr-F: 0.104
testset: URL, BLEU: 53.7, chr-F: 0.504
testset: URL, BLEU: 8.5, chr-F: 0.311
testset: URL, BLEU: 8.7, chr-F: 0.259
testset: URL, BLEU: 10.3, chr-F: 0.303
testset: URL, BLEU: 1.3, chr-F: 0.006
testset: URL, BLEU: 8.6, chr-F: 0.331
testset: URL, BLEU: 7.2, chr-F: 0.301
testset: URL, BLEU: 0.4, chr-F: 0.074
testset: URL, BLEU: 14.4, chr-F: 0.256
testset: URL, BLEU: 9.8, chr-F: 0.325
testset: URL, BLEU: 6.6, chr-F: 0.127
testset: URL, BLEU: 50.0, chr-F: 0.657
testset: URL, BLEU: 4.5, chr-F: 0.223
testset: URL, BLEU: 8.6, chr-F: 0.316
testset: URL, BLEU: 19.1, chr-F: 0.445
testset: URL, BLEU: 9.8, chr-F: 0.313
testset: URL, BLEU: 9.1, chr-F: 0.318
testset: URL, BLEU: 4.8, chr-F: 0.213
testset: URL, BLEU: 2.0, chr-F: 0.138
testset: URL, BLEU: 49.7, chr-F: 0.630
testset: URL, BLEU: 1.0, chr-F: 0.105
testset: URL, BLEU: 0.0, chr-F: 0.011
testset: URL, BLEU: 4.1, chr-F: 0.194
testset: URL, BLEU: 23.0, chr-F: 0.410
testset: URL, BLEU: 22.2, chr-F: 0.448
testset: URL, BLEU: 6.4, chr-F: 0.341
testset: URL, BLEU: 1.2, chr-F: 0.035
testset: URL, BLEU: 3.4, chr-F: 0.204
testset: URL, BLEU: 31.2, chr-F: 0.528
testset: URL, BLEU: 33.9, chr-F: 0.570
testset: URL, BLEU: 26.9, chr-F: 0.490
testset: URL, BLEU: 0.2, chr-F: 0.039
testset: URL, BLEU: 0.3, chr-F: 0.061
testset: URL, BLEU: 17.3, chr-F: 0.455
testset: URL, BLEU: 47.1, chr-F: 0.634
testset: URL, BLEU: 31.1, chr-F: 0.530
testset: URL, BLEU: 0.7, chr-F: 0.061
testset: URL, BLEU: 32.4, chr-F: 0.544
testset: URL, BLEU: 40.1, chr-F: 0.583
testset: URL, BLEU: 5.1, chr-F: 0.207
testset: URL, BLEU: 1.8, chr-F: 0.304
testset: URL, BLEU: 5.6, chr-F: 0.233
testset: URL, BLEU: 0.3, chr-F: 0.149
testset: URL, BLEU: 6.4, chr-F: 0.412
testset: URL, BLEU: 11.4, chr-F: 0.357
testset: URL, BLEU: 0.1, chr-F: 0.067
testset: URL, BLEU: 9.1, chr-F: 0.316
testset: URL, BLEU: 16.8, chr-F: 0.416
testset: URL, BLEU: 34.5, chr-F: 0.562
testset: URL, BLEU: 5.5, chr-F: 0.204
testset: URL, BLEU: 0.2, chr-F: 0.001
testset: URL, BLEU: 0.1, chr-F: 0.006
testset: URL, BLEU: 20.8, chr-F: 0.424
testset: URL, BLEU: 28.9, chr-F: 0.511
testset: URL, BLEU: 5.1, chr-F: 0.336
testset: URL, BLEU: 11.5, chr-F: 0.401
testset: URL, BLEU: 17.2, chr-F: 0.362
testset: URL, BLEU: 37.7, chr-F: 0.606
testset: URL, BLEU: 2.8, chr-F: 0.148
testset: URL, BLEU: 14.3, chr-F: 0.188
testset: URL, BLEU: 0.4, chr-F: 0.129
testset: URL, BLEU: 2.8, chr-F: 0.258
testset: URL, BLEU: 30.3, chr-F: 0.490
testset: URL, BLEU: 0.3, chr-F: 0.099
testset: URL, BLEU: 18.3, chr-F: 0.461
testset: URL, BLEU: 0.6, chr-F: 0.185
testset: URL, BLEU: 1.2, chr-F: 0.163
testset: URL, BLEU: 15.3, chr-F: 0.385
testset: URL, BLEU: 45.7, chr-F: 0.393
testset: URL, BLEU: 29.5, chr-F: 0.498
testset: URL, BLEU: 19.4, chr-F: 0.456
testset: URL, BLEU: 12.9, chr-F: 0.356
testset: URL, BLEU: 33.0, chr-F: 0.532
testset: URL, BLEU: 1.2, chr-F: 0.072
testset: URL, BLEU: 35.1, chr-F: 0.553
testset: URL, BLEU: 6.8, chr-F: 0.313
testset: URL, BLEU: 0.2, chr-F: 0.004
testset: URL, BLEU: 3.6, chr-F: 0.112
testset: URL, BLEU: 78.3, chr-F: 0.917
testset: URL, BLEU: 0.1, chr-F: 0.084
testset: URL, BLEU: 0.3, chr-F: 0.117
testset: URL, BLEU: 22.4, chr-F: 0.468
testset: URL, BLEU: 33.0, chr-F: 0.559
testset: URL, BLEU: 0.6, chr-F: 0.084
testset: URL, BLEU: 5.9, chr-F: 0.278
testset: URL, BLEU: 4.2, chr-F: 0.257
testset: URL, BLEU: 29.7, chr-F: 0.531
testset: URL, BLEU: 28.8, chr-F: 0.498
testset: URL, BLEU: 0.4, chr-F: 0.056
testset: URL, BLEU: 1.7, chr-F: 0.222
testset: URL, BLEU: 2.4, chr-F: 0.207
testset: URL, BLEU: 38.6, chr-F: 0.598
testset: URL, BLEU: 23.9, chr-F: 0.455
testset: URL, BLEU: 1.2, chr-F: 0.159
testset: URL, BLEU: 44.2, chr-F: 0.609
testset: URL, BLEU: 2.4, chr-F: 0.123
testset: URL, BLEU: 2.8, chr-F: 0.244
testset: URL, BLEU: 0.5, chr-F: 0.034
testset: URL, BLEU: 26.7, chr-F: 0.474
testset: URL, BLEU: 2.3, chr-F: 0.333
testset: URL, BLEU: 0.6, chr-F: 0.088
testset: URL, BLEU: 5.3, chr-F: 0.178
testset: URL, BLEU: 8.7, chr-F: 0.271
testset: URL, BLEU: 19.2, chr-F: 0.394
testset: URL, BLEU: 12.3, chr-F: 0.482
testset: URL, BLEU: 8.3, chr-F: 0.286
testset: URL, BLEU: 6.1, chr-F: 0.181
testset: URL, BLEU: 12.7, chr-F: 0.535
testset: URL, BLEU: 4.1, chr-F: 0.144
testset: URL, BLEU: 0.5, chr-F: 0.033
testset: URL, BLEU: 12.4, chr-F: 0.127
testset: URL, BLEU: 6.9, chr-F: 0.233
testset: URL, BLEU: 0.5, chr-F: 0.045
testset: URL, BLEU: 0.0, chr-F: 0.244
testset: URL, BLEU: 4.2, chr-F: 0.280
testset: URL, BLEU: 21.7, chr-F: 0.448
testset: URL, BLEU: 22.9, chr-F: 0.431
testset: URL, BLEU: 10.7, chr-F: 0.140
testset: URL, BLEU: 31.8, chr-F: 0.455
testset: URL, BLEU: 0.5, chr-F: 0.040
testset: URL, BLEU: 0.7, chr-F: 0.204
testset: URL, BLEU: 34.8, chr-F: 0.528
testset: URL, BLEU: 8.1, chr-F: 0.318
testset: URL, BLEU: 21.4, chr-F: 0.324
testset: URL, BLEU: 0.1, chr-F: 0.000
testset: URL, BLEU: 6.6, chr-F: 0.127
testset: URL, BLEU: 35.7, chr-F: 0.576
testset: URL, BLEU: 32.6, chr-F: 0.511
testset: URL, BLEU: 17.7, chr-F: 0.342
testset: URL, BLEU: 12.1, chr-F: 0.304
testset: URL, BLEU: 31.7, chr-F: 0.438
testset: URL, BLEU: 30.6, chr-F: 0.479
testset: URL, BLEU: 0.5, chr-F: 0.156
testset: URL, BLEU: 27.5, chr-F: 0.247
testset: URL, BLEU: 16.1, chr-F: 0.330
testset: URL, BLEU: 4.0, chr-F: 0.167
testset: URL, BLEU: 13.2, chr-F: 0.257
testset: URL, BLEU: 6.0, chr-F: 0.241
testset: URL, BLEU: 0.0, chr-F: 0.170
testset: URL, BLEU: 0.0, chr-F: 0.427
testset: URL, BLEU: 0.0, chr-F: 1.000
testset: URL, BLEU: 31.8, chr-F: 0.374
testset: URL, BLEU: 11.5, chr-F: 0.416
testset: URL, BLEU: 15.1, chr-F: 0.348
testset: URL, BLEU: 17.5, chr-F: 0.329
testset: URL, BLEU: 13.1, chr-F: 0.346
testset: URL, BLEU: 12.1, chr-F: 0.306
testset: URL, BLEU: 8.0, chr-F: 0.035
testset: URL, BLEU: 20.8, chr-F: 0.299
testset: URL, BLEU: 13.7, chr-F: 0.355
testset: URL, BLEU: 24.7, chr-F: 0.423
testset: URL, BLEU: 12.7, chr-F: 0.322
testset: URL, BLEU: 7.8, chr-F: 0.288
testset: URL, BLEU: 13.5, chr-F: 0.390
testset: URL, BLEU: 32.0, chr-F: 0.490
testset: URL, BLEU: 5.0, chr-F: 0.135
testset: URL, BLEU: 18.0, chr-F: 0.403
testset: URL, BLEU: 16.9, chr-F: 0.377
testset: URL, BLEU: 0.0, chr-F: 0.077
testset: URL, BLEU: 2.4, chr-F: 0.328
testset: URL, BLEU: 0.0, chr-F: 0.673
testset: URL, BLEU: 2.5, chr-F: 0.139
testset: URL, BLEU: 24.5, chr-F: 0.458
testset: URL, BLEU: 13.3, chr-F: 0.324
testset: URL, BLEU: 30.4, chr-F: 0.539
testset: URL, BLEU: 30.2, chr-F: 0.448
testset: URL, BLEU: 37.9, chr-F: 0.571
testset: URL, BLEU: 45.8, chr-F: 0.627
testset: URL, BLEU: 31.1, chr-F: 0.561
testset: URL, BLEU: 36.2, chr-F: 0.573
testset: URL, BLEU: 22.7, chr-F: 0.524
testset: URL, BLEU: 47.4, chr-F: 0.674
testset: URL, BLEU: 28.4, chr-F: 0.465
testset: URL, BLEU: 53.2, chr-F: 0.704
testset: URL, BLEU: 1.4, chr-F: 0.140
testset: URL, BLEU: 3.2, chr-F: 0.104
testset: URL, BLEU: 9.9, chr-F: 0.243
testset: URL, BLEU: 6.2, chr-F: 0.269
testset: URL, BLEU: 0.0, chr-F: 0.056
testset: URL, BLEU: 6.6, chr-F: 0.107
testset: URL, BLEU: 12.0, chr-F: 0.356
testset: URL, BLEU: 15.7, chr-F: 0.384
testset: URL, BLEU: 14.8, chr-F: 0.320
testset: URL, BLEU: 4.1, chr-F: 0.292
testset: URL, BLEU: 19.0, chr-F: 0.111
testset: URL, BLEU: 8.4, chr-F: 0.321
testset: URL, BLEU: 0.9, chr-F: 0.064
testset: URL, BLEU: 13.5, chr-F: 0.361
testset: URL, BLEU: 8.2, chr-F: 0.228
testset: URL, BLEU: 31.9, chr-F: 0.610
testset: URL, BLEU: 0.0, chr-F: 0.050
testset: URL, BLEU: 0.5, chr-F: 0.010
testset: URL, BLEU: 4.5, chr-F: 0.206
testset: URL, BLEU: 4.2, chr-F: 0.220
testset: URL, BLEU: 3.9, chr-F: 0.202
testset: URL, BLEU: 16.8, chr-F: 0.389
testset: URL, BLEU: 5.2, chr-F: 0.298
testset: URL, BLEU: 24.7, chr-F: 0.406
testset: URL, BLEU: 0.4, chr-F: 0.137
testset: URL, BLEU: 16.8, chr-F: 0.310
testset: URL, BLEU: 5.4, chr-F: 0.370
testset: URL, BLEU: 4.3, chr-F: 0.170
testset: URL, BLEU: 0.6, chr-F: 0.044
testset: URL, BLEU: 0.1, chr-F: 0.050
testset: URL, BLEU: 0.2, chr-F: 0.064
testset: URL, BLEU: 3.1, chr-F: 0.013
testset: URL, BLEU: 0.2, chr-F: 0.050
testset: URL, BLEU: 2.7, chr-F: 0.155
testset: URL, BLEU: 4.7, chr-F: 0.198
testset: URL, BLEU: 1.9, chr-F: 0.146
testset: URL, BLEU: 12.8, chr-F: 0.234
testset: URL, BLEU: 0.5, chr-F: 0.114
testset: URL, BLEU: 0.8, chr-F: 0.163
testset: URL, BLEU: 2.4, chr-F: 0.141
testset: URL, BLEU: 12.6, chr-F: 0.393
testset: URL, BLEU: 15.9, chr-F: 0.322
testset: URL, BLEU: 19.0, chr-F: 0.308
testset: URL, BLEU: 15.9, chr-F: 0.301
testset: URL, BLEU: 14.7, chr-F: 0.250
testset: URL, BLEU: 38.5, chr-F: 0.522
testset: URL, BLEU: 17.6, chr-F: 0.424
testset: URL, BLEU: 32.0, chr-F: 0.472
testset: URL, BLEU: 31.2, chr-F: 0.496
testset: URL, BLEU: 40.1, chr-F: 0.579
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 27.8, chr-F: 0.543
testset: URL, BLEU: 32.9, chr-F: 0.545
testset: URL, BLEU: 38.6, chr-F: 0.563
testset: URL, BLEU: 2.3, chr-F: 0.299
testset: URL, BLEU: 33.3, chr-F: 0.548
testset: URL, BLEU: 37.9, chr-F: 0.602
testset: URL, BLEU: 9.8, chr-F: 0.289
testset: URL, BLEU: 38.0, chr-F: 0.718
testset: URL, BLEU: 31.8, chr-F: 0.528
testset: URL, BLEU: 31.7, chr-F: 0.548
testset: URL, BLEU: 28.1, chr-F: 0.484
testset: URL, BLEU: 38.9, chr-F: 0.596
testset: URL, BLEU: 38.6, chr-F: 0.589
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 36.0, chr-F: 0.557
testset: URL, BLEU: 8.1, chr-F: 0.441
testset: URL, BLEU: 8.9, chr-F: 0.439
testset: URL, BLEU: 8.8, chr-F: 0.288
testset: URL, BLEU: 26.1, chr-F: 0.414
testset: URL, BLEU: 25.5, chr-F: 0.440
testset: URL, BLEU: 30.1, chr-F: 0.449
testset: URL, BLEU: 12.6, chr-F: 0.412
testset: URL, BLEU: 9.9, chr-F: 0.416
testset: URL, BLEU: 8.4, chr-F: 0.289
testset: URL, BLEU: 21.2, chr-F: 0.395
testset: URL, BLEU: 25.9, chr-F: 0.384
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 10.4, chr-F: 0.376
testset: URL, BLEU: 18.1, chr-F: 0.373
testset: URL, BLEU: 24.4, chr-F: 0.467
testset: URL, BLEU: 42.9, chr-F: 0.583
testset: URL, BLEU: 19.5, chr-F: 0.444
testset: URL, BLEU: 11.6, chr-F: 0.323
testset: URL, BLEU: 22.1, chr-F: 0.398
testset: URL, BLEU: 32.1, chr-F: 0.386
testset: URL, BLEU: 21.9, chr-F: 0.407
testset: URL, BLEU: 29.3, chr-F: 0.476
testset: URL, BLEU: 40.5, chr-F: 0.708
testset: URL, BLEU: 0.0, chr-F: 0.034
testset: URL, BLEU: 38.1, chr-F: 0.582
testset: URL, BLEU: 31.8, chr-F: 0.511
testset: URL, BLEU: 29.8, chr-F: 0.483
testset: URL, BLEU: 39.8, chr-F: 0.336
testset: URL, BLEU: 26.3, chr-F: 0.441
testset: URL, BLEU: 27.3, chr-F: 0.469
testset: URL, BLEU: 1.9, chr-F: 0.047
testset: URL, BLEU: 28.9, chr-F: 0.501
testset: URL, BLEU: 2.6, chr-F: 0.135
testset: URL, BLEU: 59.6, chr-F: 0.740
testset: URL, BLEU: 0.1, chr-F: 0.012
testset: URL, BLEU: 40.2, chr-F: 0.566
testset: URL, BLEU: 19.7, chr-F: 0.358
testset: URL, BLEU: 17.4, chr-F: 0.465
testset: URL, BLEU: 18.0, chr-F: 0.386
testset: URL, BLEU: 30.7, chr-F: 0.496
testset: URL, BLEU: 10.7, chr-F: 0.133
testset: URL, BLEU: 38.1, chr-F: 0.539
testset: URL, BLEU: 53.2, chr-F: 0.676
testset: URL, BLEU: 3.8, chr-F: 0.125
testset: URL, BLEU: 3.4, chr-F: 0.252
testset: URL, BLEU: 24.2, chr-F: 0.460
testset: URL, BLEU: 12.1, chr-F: 0.427
testset: URL, BLEU: 4.7, chr-F: 0.287
testset: URL, BLEU: 27.8, chr-F: 0.482
testset: URL, BLEU: 40.6, chr-F: 0.608
testset: URL, BLEU: 23.1, chr-F: 0.450
testset: URL, BLEU: 0.8, chr-F: 0.060
testset: URL, BLEU: 10.1, chr-F: 0.375
testset: URL, BLEU: 38.9, chr-F: 0.577
testset: URL, BLEU: 31.7, chr-F: 0.539
testset: URL, BLEU: 0.2, chr-F: 0.061
testset: URL, BLEU: 31.5, chr-F: 0.539
testset: URL, BLEU: 47.4, chr-F: 0.633
testset: URL, BLEU: 6.4, chr-F: 0.247
testset: URL, BLEU: 4.2, chr-F: 0.236
testset: URL, BLEU: 46.6, chr-F: 0.642
testset: URL, BLEU: 20.0, chr-F: 0.409
testset: URL, BLEU: 7.8, chr-F: 0.312
testset: URL, BLEU: 36.3, chr-F: 0.577
testset: URL, BLEU: 1.1, chr-F: 0.030
testset: URL, BLEU: 39.4, chr-F: 0.595
testset: URL, BLEU: 18.5, chr-F: 0.408
testset: URL, BLEU: 1.9, chr-F: 0.160
testset: URL, BLEU: 1.0, chr-F: 0.178
testset: URL, BLEU: 7.1, chr-F: 0.320
testset: URL, BLEU: 29.0, chr-F: 0.511
testset: URL, BLEU: 0.2, chr-F: 0.107
testset: URL, BLEU: 20.7, chr-F: 0.475
testset: URL, BLEU: 20.6, chr-F: 0.373
testset: URL, BLEU: 14.3, chr-F: 0.409
testset: URL, BLEU: 13.3, chr-F: 0.378
testset: URL, BLEU: 37.8, chr-F: 0.578
testset: URL, BLEU: 35.7, chr-F: 0.578
testset: URL, BLEU: 11.0, chr-F: 0.369
testset: URL, BLEU: 1.2, chr-F: 0.010
testset: URL, BLEU: 0.2, chr-F: 0.110
testset: URL, BLEU: 25.9, chr-F: 0.507
testset: URL, BLEU: 36.8, chr-F: 0.597
testset: URL, BLEU: 34.3, chr-F: 0.574
testset: URL, BLEU: 28.5, chr-F: 0.494
testset: URL, BLEU: 11.7, chr-F: 0.364
testset: URL, BLEU: 46.3, chr-F: 0.653
testset: URL, BLEU: 21.9, chr-F: 0.418
testset: URL, BLEU: 37.7, chr-F: 0.562
testset: URL, BLEU: 33.1, chr-F: 0.538
testset: URL, BLEU: 0.8, chr-F: 0.095
testset: URL, BLEU: 10.3, chr-F: 0.280
testset: URL, BLEU: 3.9, chr-F: 0.098
testset: URL, BLEU: 5.0, chr-F: 0.217
testset: URL, BLEU: 12.2, chr-F: 0.357
testset: URL, BLEU: 4.1, chr-F: 0.237
testset: URL, BLEU: 5.3, chr-F: 0.299
testset: URL, BLEU: 15.3, chr-F: 0.322
testset: URL, BLEU: 0.0, chr-F: 0.095
testset: URL, BLEU: 11.3, chr-F: 0.272
testset: URL, BLEU: 0.0, chr-F: 0.069
testset: URL, BLEU: 35.4, chr-F: 0.540
testset: URL, BLEU: 24.3, chr-F: 0.509
testset: URL, BLEU: 12.0, chr-F: 0.226
testset: URL, BLEU: 10.0, chr-F: 0.205
testset: URL, BLEU: 5.5, chr-F: 0.048
testset: URL, BLEU: 16.5, chr-F: 0.236
testset: URL, BLEU: 7.6, chr-F: 0.081
testset: URL, BLEU: 1.6, chr-F: 0.013
testset: URL, BLEU: 11.4, chr-F: 0.362
testset: URL, BLEU: 0.2, chr-F: 0.067
testset: URL, BLEU: 6.1, chr-F: 0.240
testset: URL, BLEU: 1.9, chr-F: 0.161
testset: URL, BLEU: 3.3, chr-F: 0.155
testset: URL, BLEU: 31.9, chr-F: 0.184
testset: URL, BLEU: 5.0, chr-F: 0.230
testset: URL, BLEU: 37.0, chr-F: 0.295
testset: URL, BLEU: 1.3, chr-F: 0.184
testset: URL, BLEU: 39.1, chr-F: 0.426
testset: URL, BLEU: 4.3, chr-F: 0.206
testset: URL, BLEU: 2.1, chr-F: 0.164
testset: URL, BLEU: 1.4, chr-F: 0.046
testset: URL, BLEU: 9.7, chr-F: 0.330
testset: URL, BLEU: 35.4, chr-F: 0.529
testset: URL, BLEU: 33.1, chr-F: 0.604
testset: URL, BLEU: 15.4, chr-F: 0.325
testset: URL, BLEU: 19.3, chr-F: 0.405
testset: URL, BLEU: 23.1, chr-F: 0.421
testset: URL, BLEU: 2.2, chr-F: 0.173
testset: URL, BLEU: 5.2, chr-F: 0.194
testset: URL, BLEU: 26.3, chr-F: 0.405
testset: URL, BLEU: 0.0, chr-F: 0.170
testset: URL, BLEU: 21.4, chr-F: 0.347
testset: URL, BLEU: 1.2, chr-F: 0.058
testset: URL, BLEU: 22.7, chr-F: 0.479
testset: URL, BLEU: 2.4, chr-F: 0.190
testset: URL, BLEU: 3.4, chr-F: 0.239
testset: URL, BLEU: 45.5, chr-F: 0.580
testset: URL, BLEU: 23.0, chr-F: 0.690
testset: URL, BLEU: 33.5, chr-F: 0.449
testset: URL, BLEU: 66.9, chr-F: 0.951
testset: URL, BLEU: 0.0, chr-F: 0.076
testset: URL, BLEU: 27.5, chr-F: 0.448
testset: URL, BLEU: 78.3, chr-F: 0.693
testset: URL, BLEU: 6.5, chr-F: 0.308
testset: URL, BLEU: 0.0, chr-F: 0.179
testset: URL, BLEU: 59.5, chr-F: 0.602
testset: URL, BLEU: 37.0, chr-F: 0.553
testset: URL, BLEU: 66.9, chr-F: 0.783
testset: URL, BLEU: 8.1, chr-F: 0.282
testset: URL, BLEU: 4.8, chr-F: 0.212
testset: URL, BLEU: 5.0, chr-F: 0.237
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 0.9, chr-F: 0.068
testset: URL, BLEU: 10.6, chr-F: 0.284
testset: URL, BLEU: 27.5, chr-F: 0.481
testset: URL, BLEU: 15.6, chr-F: 0.331
testset: URL, BLEU: 2.9, chr-F: 0.203
testset: URL, BLEU: 29.4, chr-F: 0.479
testset: URL, BLEU: 19.9, chr-F: 0.391
testset: URL, BLEU: 20.5, chr-F: 0.396
testset: URL, BLEU: 1.0, chr-F: 0.082
testset: URL, BLEU: 7.9, chr-F: 0.407
testset: URL, BLEU: 9.3, chr-F: 0.286
testset: URL, BLEU: 7.1, chr-F: 0.192
testset: URL, BLEU: 3.6, chr-F: 0.150
testset: URL, BLEU: 0.2, chr-F: 0.001
testset: URL, BLEU: 15.1, chr-F: 0.322
testset: URL, BLEU: 8.3, chr-F: 0.108
testset: URL, BLEU: 20.7, chr-F: 0.415
testset: URL, BLEU: 7.9, chr-F: 0.260
testset: URL, BLEU: 0.2, chr-F: 0.087
testset: URL, BLEU: 5.6, chr-F: 0.301
testset: URL, BLEU: 10.2, chr-F: 0.352
testset: URL, BLEU: 24.3, chr-F: 0.444
testset: URL, BLEU: 14.5, chr-F: 0.338
testset: URL, BLEU: 0.1, chr-F: 0.006
testset: URL, BLEU: 21.8, chr-F: 0.412
testset: URL, BLEU: 12.2, chr-F: 0.336
testset: URL, BLEU: 12.7, chr-F: 0.343
testset: URL, BLEU: 16.6, chr-F: 0.362
testset: URL, BLEU: 3.2, chr-F: 0.215
testset: URL, BLEU: 18.9, chr-F: 0.414
testset: URL, BLEU: 53.4, chr-F: 0.708
testset: URL, BLEU: 14.0, chr-F: 0.343
testset: URL, BLEU: 2.1, chr-F: 0.182
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 34.5, chr-F: 0.540
testset: URL, BLEU: 33.6, chr-F: 0.520
testset: URL, BLEU: 40.5, chr-F: 0.598
testset: URL, BLEU: 72.7, chr-F: 0.770
testset: URL, BLEU: 30.5, chr-F: 0.570
testset: URL, BLEU: 5.7, chr-F: 0.362
testset: URL, BLEU: 23.5, chr-F: 0.504
testset: URL, BLEU: 13.7, chr-F: 0.550
testset: URL, BLEU: 37.6, chr-F: 0.551
testset: URL, BLEU: 32.5, chr-F: 0.517
testset: URL, BLEU: 8.6, chr-F: 0.483
testset: URL, BLEU: 26.6, chr-F: 0.511
testset: URL, BLEU: 95.1, chr-F: 0.958
testset: URL, BLEU: 9.0, chr-F: 0.488
testset: URL, BLEU: 6.8, chr-F: 0.251
testset: URL, BLEU: 12.2, chr-F: 0.329
testset: URL, BLEU: 10.4, chr-F: 0.366
testset: URL, BLEU: 25.7, chr-F: 0.472
testset: URL, BLEU: 37.5, chr-F: 0.551
testset: URL, BLEU: 32.1, chr-F: 0.489
testset: URL, BLEU: 22.3, chr-F: 0.460
testset: URL, BLEU: 7.4, chr-F: 0.195
testset: URL, BLEU: 22.6, chr-F: 0.378
testset: URL, BLEU: 9.7, chr-F: 0.282
testset: URL, BLEU: 7.2, chr-F: 0.374
testset: URL, BLEU: 30.9, chr-F: 0.529
testset: URL, BLEU: 25.0, chr-F: 0.439
testset: URL, BLEU: 30.6, chr-F: 0.504
testset: URL, BLEU: 8.6, chr-F: 0.331
testset: URL, BLEU: 32.9, chr-F: 0.516
testset: URL, BLEU: 19.6, chr-F: 0.371
testset: URL, BLEU: 6.5, chr-F: 0.360
testset: URL, BLEU: 13.7, chr-F: 0.310
testset: URL, BLEU: 13.1, chr-F: 0.368
testset: URL, BLEU: 3.4, chr-F: 0.064
testset: URL, BLEU: 9.3, chr-F: 0.351
testset: URL, BLEU: 22.3, chr-F: 0.323
testset: URL, BLEU: 10.9, chr-F: 0.333
testset: URL, BLEU: 49.5, chr-F: 0.589
testset: URL, BLEU: 0.0, chr-F: 0.051
testset: URL, BLEU: 9.7, chr-F: 0.353
testset: URL, BLEU: 65.1, chr-F: 0.463
testset: URL, BLEU: 35.6, chr-F: 0.533
testset: URL, BLEU: 33.7, chr-F: 0.448
testset: URL, BLEU: 24.3, chr-F: 0.451
testset: URL, BLEU: 23.4, chr-F: 0.621
testset: URL, BLEU: 0.5, chr-F: 0.104
testset: URL, BLEU: 14.2, chr-F: 0.412
testset: URL, BLEU: 7.8, chr-F: 0.179
testset: URL, BLEU: 7.6, chr-F: 0.106
testset: URL, BLEU: 32.4, chr-F: 0.488
testset: URL, BLEU: 27.8, chr-F: 0.599
testset: URL, BLEU: 12.7, chr-F: 0.319
testset: URL, BLEU: 18.0, chr-F: 0.392
testset: URL, BLEU: 15.6, chr-F: 0.458
testset: URL, BLEU: 0.6, chr-F: 0.065
testset: URL, BLEU: 32.5, chr-F: 0.403
testset: URL, BLEU: 1.4, chr-F: 0.236
testset: URL, BLEU: 49.8, chr-F: 0.429
testset: URL, BLEU: 18.6, chr-F: 0.460
testset: URL, BLEU: 5.1, chr-F: 0.230
testset: URL, BLEU: 14.2, chr-F: 0.379
testset: URL, BLEU: 20.0, chr-F: 0.422
testset: URL, BLEU: 40.7, chr-F: 0.470
testset: URL, BLEU: 7.3, chr-F: 0.407
testset: URL, BLEU: 35.4, chr-F: 0.638
testset: URL, BLEU: 49.0, chr-F: 0.615
testset: URL, BLEU: 42.7, chr-F: 0.655
testset: URL, BLEU: 9.7, chr-F: 0.362
testset: URL, BLEU: 61.6, chr-F: 0.819
testset: URL, BLEU: 15.0, chr-F: 0.506
testset: URL, BLEU: 31.0, chr-F: 0.548
testset: URL, BLEU: 35.8, chr-F: 0.524
testset: URL, BLEU: 30.2, chr-F: 0.486
testset: URL, BLEU: 32.5, chr-F: 0.589
testset: URL, BLEU: 16.6, chr-F: 0.557
testset: URL, BLEU: 11.6, chr-F: 0.395
testset: URL, BLEU: 42.7, chr-F: 0.680
testset: URL, BLEU: 53.7, chr-F: 0.833
testset: URL, BLEU: 10.1, chr-F: 0.492
testset: URL, BLEU: 9.7, chr-F: 0.196
testset: URL, BLEU: 24.7, chr-F: 0.727
testset: URL, BLEU: 43.2, chr-F: 0.601
testset: URL, BLEU: 23.6, chr-F: 0.361
testset: URL, BLEU: 42.7, chr-F: 0.864
testset: URL, BLEU: 3.4, chr-F: 0.323
testset: URL, BLEU: 17.1, chr-F: 0.418
testset: URL, BLEU: 1.8, chr-F: 0.199
testset: URL, BLEU: 11.9, chr-F: 0.258
testset: URL, BLEU: 3.4, chr-F: 0.115
testset: URL, BLEU: 0.0, chr-F: 0.000
testset: URL, BLEU: 23.5, chr-F: 0.470
testset: URL, BLEU: 19.7, chr-F: 0.490
testset: URL, BLEU: 27.8, chr-F: 0.472
testset: URL, BLEU: 2.0, chr-F: 0.232
testset: URL, BLEU: 5.9, chr-F: 0.241
testset: URL, BLEU: 25.9, chr-F: 0.465
testset: URL, BLEU: 1.7, chr-F: 0.195
testset: URL, BLEU: 3.4, chr-F: 0.228
testset: URL, BLEU: 23.4, chr-F: 0.481
testset: URL, BLEU: 11.5, chr-F: 0.304
testset: URL, BLEU: 5.8, chr-F: 0.243
testset: URL, BLEU: 20.9, chr-F: 0.442
testset: URL, BLEU: 14.8, chr-F: 0.431
testset: URL, BLEU: 83.8, chr-F: 0.946
testset: URL, BLEU: 9.1, chr-F: 0.349
testset: URL, BLEU: 15.4, chr-F: 0.385
testset: URL, BLEU: 3.4, chr-F: 0.195
testset: URL, BLEU: 18.8, chr-F: 0.401
testset: URL, BLEU: 0.0, chr-F: 0.056
testset: URL, BLEU: 22.6, chr-F: 0.451
testset: URL, BLEU: 5.7, chr-F: 0.267
testset: URL, BLEU: 8.0, chr-F: 0.102
testset: URL, BLEU: 30.8, chr-F: 0.509
testset: URL, BLEU: 22.8, chr-F: 0.416
testset: URL, BLEU: 7.0, chr-F: 0.321
testset: URL, BLEU: 35.4, chr-F: 0.561
testset: URL, BLEU: 42.7, chr-F: 0.835
testset: URL, BLEU: 38.3, chr-F: 0.491
testset: URL, BLEU: 18.5, chr-F: 0.399
testset: URL, BLEU: 32.6, chr-F: 0.552
testset: URL, BLEU: 18.1, chr-F: 0.426
testset: URL, BLEU: 28.9, chr-F: 0.480
testset: URL, BLEU: 6.9, chr-F: 0.198
testset: URL, BLEU: 6.6, chr-F: 0.187
testset: URL, BLEU: 31.9, chr-F: 0.498
testset: URL, BLEU: 0.5, chr-F: 0.000
testset: URL, BLEU: 0.0, chr-F: 0.023
testset: URL, BLEU: 1.2, chr-F: 0.148
testset: URL, BLEU: 28.5, chr-F: 0.505
testset: URL, BLEU: 7.8, chr-F: 0.164
testset: URL, BLEU: 38.2, chr-F: 0.584
testset: URL, BLEU: 42.8, chr-F: 0.612
testset: URL, BLEU: 15.3, chr-F: 0.405
testset: URL, BLEU: 26.0, chr-F: 0.447
testset: URL, BLEU: 0.0, chr-F: 0.353
testset: URL, BLEU: 24.3, chr-F: 0.440
testset: URL, BLEU: 31.7, chr-F: 0.527
testset: URL, BLEU: 0.1, chr-F: 0.080
testset: URL, BLEU: 20.1, chr-F: 0.464
testset: URL, BLEU: 42.8, chr-F: 0.365
testset: URL, BLEU: 2.1, chr-F: 0.161
testset: URL, BLEU: 50.1, chr-F: 0.670
testset: URL, BLEU: 42.7, chr-F: 0.835
testset: URL, BLEU: 17.5, chr-F: 0.410
testset: URL, BLEU: 3.2, chr-F: 0.189
testset: URL, BLEU: 28.7, chr-F: 0.468
testset: URL, BLEU: 31.9, chr-F: 0.546
testset: URL, BLEU: 24.4, chr-F: 0.504
testset: URL, BLEU: 0.6, chr-F: 0.048
testset: URL, BLEU: 49.1, chr-F: 0.660
testset: URL, BLEU: 38.3, chr-F: 0.589
testset: URL, BLEU: 0.2, chr-F: 0.084
testset: URL, BLEU: 35.3, chr-F: 0.528
testset: URL, BLEU: 42.4, chr-F: 0.602
testset: URL, BLEU: 6.1, chr-F: 0.269
testset: URL, BLEU: 18.6, chr-F: 0.459
testset: URL, BLEU: 35.7, chr-F: 0.549
testset: URL, BLEU: 2.8, chr-F: 0.099
testset: URL, BLEU: 19.2, chr-F: 0.438
testset: URL, BLEU: 35.0, chr-F: 0.576
testset: URL, BLEU: 0.5, chr-F: 0.129
testset: URL, BLEU: 26.8, chr-F: 0.418
testset: URL, BLEU: 35.3, chr-F: 0.580
testset: URL, BLEU: 4.2, chr-F: 0.147
testset: URL, BLEU: 0.7, chr-F: 0.101
testset: URL, BLEU: 6.7, chr-F: 0.314
testset: URL, BLEU: 17.6, chr-F: 0.384
testset: URL, BLEU: 0.0, chr-F: 0.238
testset: URL, BLEU: 3.6, chr-F: 0.210
testset: URL, BLEU: 15.9, chr-F: 0.405
testset: URL, BLEU: 42.4, chr-F: 0.618
testset: URL, BLEU: 9.0, chr-F: 0.306
testset: URL, BLEU: 38.9, chr-F: 0.531
testset: URL, BLEU: 25.8, chr-F: 0.498
testset: URL, BLEU: 31.7, chr-F: 0.535
testset: URL, BLEU: 26.6, chr-F: 0.495
testset: URL, BLEU: 30.0, chr-F: 0.512
testset: URL, BLEU: 4.3, chr-F: 0.299
testset: URL, BLEU: 35.0, chr-F: 0.560
testset: URL, BLEU: 1.6, chr-F: 0.201
testset: URL, BLEU: 72.2, chr-F: 0.801
testset: URL, BLEU: 5.0, chr-F: 0.129
testset: URL, BLEU: 26.2, chr-F: 0.481
testset: URL, BLEU: 3.5, chr-F: 0.133
testset: URL, BLEU: 11.5, chr-F: 0.293
testset: URL, BLEU: 30.3, chr-F: 0.471
testset: URL, BLEU: 90.1, chr-F: 0.839
testset: URL, BLEU: 50.0, chr-F: 0.638
testset: URL, BLEU: 42.2, chr-F: 0.467
testset: URL, BLEU: 3.2, chr-F: 0.188
testset: URL, BLEU: 35.4, chr-F: 0.529
testset: URL, BLEU: 38.0, chr-F: 0.627
testset: URL, BLEU: 3.2, chr-F: 0.072
testset: URL, BLEU: 14.7, chr-F: 0.465
testset: URL, BLEU: 59.0, chr-F: 0.757
testset: URL, BLEU: 32.4, chr-F: 0.560
testset: URL, BLEU: 29.9, chr-F: 0.507
testset: URL, BLEU: 40.8, chr-F: 0.585
testset: URL, BLEU: 4.2, chr-F: 0.303
testset: URL, BLEU: 10.0, chr-F: 0.345
testset: URL, BLEU: 38.4, chr-F: 0.572
testset: URL, BLEU: 18.7, chr-F: 0.375
testset: URL, BLEU: 10.7, chr-F: 0.015
testset: URL, BLEU: 21.7, chr-F: 0.465
testset: URL, BLEU: 14.8, chr-F: 0.307
testset: URL, BLEU: 23.2, chr-F: 0.445
testset: URL, BLEU: 35.2, chr-F: 0.594
testset: URL, BLEU: 10.7, chr-F: 0.037
testset: URL, BLEU: 6.6, chr-F: 0.370
testset: URL, BLEU: 3.6, chr-F: 0.261
testset: URL, BLEU: 12.2, chr-F: 0.404
testset: URL, BLEU: 8.0, chr-F: 0.442
testset: URL, BLEU: 20.3, chr-F: 0.466
testset: URL, BLEU: 39.1, chr-F: 0.598
testset: URL, BLEU: 49.0, chr-F: 0.698
testset: URL, BLEU: 26.3, chr-F: 0.515
testset: URL, BLEU: 31.0, chr-F: 0.543
testset: URL, BLEU: 28.0, chr-F: 0.475
testset: URL, BLEU: 28.1, chr-F: 0.513
testset: URL, BLEU: 1.2, chr-F: 0.193
testset: URL, BLEU: 38.2, chr-F: 0.598
testset: URL, BLEU: 58.8, chr-F: 0.741
testset: URL, BLEU: 29.1, chr-F: 0.515
testset: URL, BLEU: 42.6, chr-F: 0.473
testset: URL, BLEU: 11.2, chr-F: 0.346
testset: URL, BLEU: 13.4, chr-F: 0.331
testset: URL, BLEU: 5.3, chr-F: 0.206
testset: URL, BLEU: 19.6, chr-F: 0.423
testset: URL, BLEU: 24.5, chr-F: 0.493
testset: URL, BLEU: 22.5, chr-F: 0.408
testset: URL, BLEU: 8.8, chr-F: 0.322
testset: URL, BLEU: 16.4, chr-F: 0.387
testset: URL, BLEU: 20.4, chr-F: 0.442
testset: URL, BLEU: 66.9, chr-F: 0.968
testset: URL, BLEU: 3.9, chr-F: 0.168
testset: URL, BLEU: 9.1, chr-F: 0.175
testset: URL, BLEU: 5.8, chr-F: 0.256
testset: URL, BLEU: 8.4, chr-F: 0.243
testset: URL, BLEU: 8.9, chr-F: 0.244
testset: URL, BLEU: 8.1, chr-F: 0.297
testset: URL, BLEU: 1.2, chr-F: 0.207
testset: URL, BLEU: 11.6, chr-F: 0.338
testset: URL, BLEU: 8.2, chr-F: 0.234
testset: URL, BLEU: 7.8, chr-F: 0.331
testset: URL, BLEU: 6.4, chr-F: 0.217
testset: URL, BLEU: 5.8, chr-F: 0.230
testset: URL, BLEU: 10.8, chr-F: 0.279
testset: URL, BLEU: 6.0, chr-F: 0.225
testset: URL, BLEU: 6.1, chr-F: 0.256
testset: URL, BLEU: 0.0, chr-F: 0.626
testset: URL, BLEU: 45.7, chr-F: 0.586
testset: URL, BLEU: 43.9, chr-F: 0.589
testset: URL, BLEU: 0.0, chr-F: 0.347
testset: URL, BLEU: 41.9, chr-F: 0.587
testset: URL, BLEU: 14.4, chr-F: 0.365
testset: URL, BLEU: 5.8, chr-F: 0.274
testset: URL, BLEU: 33.0, chr-F: 0.474
testset: URL, BLEU: 36.1, chr-F: 0.479
testset: URL, BLEU: 0.7, chr-F: 0.026
testset: URL, BLEU: 13.1, chr-F: 0.310
testset: URL, BLEU: 8.8, chr-F: 0.296
testset: URL, BLEU: 13.0, chr-F: 0.309
testset: URL, BLEU: 10.0, chr-F: 0.327
testset: URL, BLEU: 15.2, chr-F: 0.304
testset: URL, BLEU: 10.4, chr-F: 0.352
testset: URL, BLEU: 40.2, chr-F: 0.589
testset: URL, BLEU: 24.8, chr-F: 0.503
testset: URL, BLEU: 29.4, chr-F: 0.508
testset: URL, BLEU: 20.3, chr-F: 0.416
testset: URL, BLEU: 28.0, chr-F: 0.489
testset: URL, BLEU: 1.3, chr-F: 0.052
testset: URL, BLEU: 7.0, chr-F: 0.347
testset: URL, BLEU: 37.0, chr-F: 0.551
testset: URL, BLEU: 29.1, chr-F: 0.508
testset: URL, BLEU: 0.8, chr-F: 0.070
testset: URL, BLEU: 32.3, chr-F: 0.519
testset: URL, BLEU: 34.1, chr-F: 0.531
testset: URL, BLEU: 1.2, chr-F: 0.234
testset: URL, BLEU: 6.5, chr-F: 0.208
testset: URL, BLEU: 30.8, chr-F: 0.510
testset: URL, BLEU: 7.2, chr-F: 0.287
testset: URL, BLEU: 14.6, chr-F: 0.301
testset: URL, BLEU: 18.4, chr-F: 0.498
testset: URL, BLEU: 31.8, chr-F: 0.546
testset: URL, BLEU: 3.5, chr-F: 0.193
testset: URL, BLEU: 11.4, chr-F: 0.336
testset: URL, BLEU: 28.5, chr-F: 0.522
testset: URL, BLEU: 2.6, chr-F: 0.134
testset: URL, BLEU: 16.0, chr-F: 0.265
testset: URL, BLEU: 7.2, chr-F: 0.311
testset: URL, BLEU: 22.9, chr-F: 0.450
testset: URL, BLEU: 21.2, chr-F: 0.493
testset: URL, BLEU: 38.0, chr-F: 0.718
testset: URL, BLEU: 2.2, chr-F: 0.173
testset: URL, BLEU: 14.4, chr-F: 0.370
testset: URL, BLEU: 30.6, chr-F: 0.501
testset: URL, BLEU: 33.3, chr-F: 0.536
testset: URL, BLEU: 4.0, chr-F: 0.282
testset: URL, BLEU: 0.4, chr-F: 0.005
testset: URL, BLEU: 1.3, chr-F: 0.032
testset: URL, BLEU: 25.9, chr-F: 0.491
testset: URL, BLEU: 0.0, chr-F: 0.083
testset: URL, BLEU: 26.5, chr-F: 0.487
testset: URL, BLEU: 34.7, chr-F: 0.550
testset: URL, BLEU: 7.4, chr-F: 0.256
testset: URL, BLEU: 30.7, chr-F: 0.516
testset: URL, BLEU: 35.0, chr-F: 0.530
testset: URL, BLEU: 32.8, chr-F: 0.538
testset: URL, BLEU: 5.6, chr-F: 0.381
testset: URL, BLEU: 4.8, chr-F: 0.146
testset: URL, BLEU: 48.1, chr-F: 0.653
testset: URL, BLEU: 8.4, chr-F: 0.213
testset: URL, BLEU: 42.7, chr-F: 0.835
testset: URL, BLEU: 9.7, chr-F: 0.539
testset: URL, BLEU: 41.5, chr-F: 0.569
testset: URL, BLEU: 36.9, chr-F: 0.612
testset: URL, BLEU: 29.0, chr-F: 0.526
testset: URL, BLEU: 0.8, chr-F: 0.049
testset: URL, BLEU: 51.4, chr-F: 0.668
testset: URL, BLEU: 30.8, chr-F: 0.532
testset: URL, BLEU: 33.8, chr-F: 0.556
testset: URL, BLEU: 44.5, chr-F: 0.622
testset: URL, BLEU: 10.7, chr-F: 0.190
testset: URL, BLEU: 4.5, chr-F: 0.273
testset: URL, BLEU: 43.0, chr-F: 0.625
testset: URL, BLEU: 8.9, chr-F: 0.365
testset: URL, BLEU: 16.0, chr-F: 0.079
testset: URL, BLEU: 12.1, chr-F: 0.315
testset: URL, BLEU: 49.2, chr-F: 0.700
testset: URL, BLEU: 0.1, chr-F: 0.004
testset: URL, BLEU: 39.2, chr-F: 0.575
testset: URL, BLEU: 15.5, chr-F: 0.387
testset: URL, BLEU: 39.9, chr-F: 0.637
testset: URL, BLEU: 3.0, chr-F: 0.133
testset: URL, BLEU: 0.6, chr-F: 0.172
testset: URL, BLEU: 5.4, chr-F: 0.325
testset: URL, BLEU: 18.8, chr-F: 0.418
testset: URL, BLEU: 16.8, chr-F: 0.569
testset: URL, BLEU: 27.3, chr-F: 0.571
testset: URL, BLEU: 7.6, chr-F: 0.327
testset: URL, BLEU: 30.5, chr-F: 0.559
testset: URL, BLEU: 14.2, chr-F: 0.370
testset: URL, BLEU: 35.6, chr-F: 0.558
testset: URL, BLEU: 38.0, chr-F: 0.587
testset: URL, BLEU: 25.5, chr-F: 0.510
testset: URL, BLEU: 5.5, chr-F: 0.058
testset: URL, BLEU: 32.0, chr-F: 0.557
testset: URL, BLEU: 26.8, chr-F: 0.493
testset: URL, BLEU: 48.7, chr-F: 0.686
testset: URL, BLEU: 43.4, chr-F: 0.612
testset: URL, BLEU: 27.5, chr-F: 0.500
testset: URL, BLEU: 9.3, chr-F: 0.293
testset: URL, BLEU: 2.2, chr-F: 0.183
testset: URL, BLEU: 1.3, chr-F: 0.179
testset: URL, BLEU: 2.3, chr-F: 0.183
testset: URL, BLEU: 0.5, chr-F: 0.173
testset: URL, BLEU: 3.4, chr-F: 0.200
testset: URL, BLEU: 1.6, chr-F: 0.166
testset: URL, BLEU: 8.3, chr-F: 0.311
testset: URL, BLEU: 9.5, chr-F: 0.361
testset: URL, BLEU: 8.8, chr-F: 0.415
testset: URL, BLEU: 21.4, chr-F: 0.347
testset: URL, BLEU: 13.3, chr-F: 0.434
testset: URL, BLEU: 2.9, chr-F: 0.204
testset: URL, BLEU: 5.3, chr-F: 0.243
testset: URL, BLEU: 6.5, chr-F: 0.194
testset: URL, BLEU: 30.2, chr-F: 0.667
testset: URL, BLEU: 35.4, chr-F: 0.493
testset: URL, BLEU: 23.6, chr-F: 0.542
testset: URL, BLEU: 10.6, chr-F: 0.344
testset: URL, BLEU: 12.7, chr-F: 0.652
testset: URL, BLEU: 32.1, chr-F: 0.524
testset: URL, BLEU: 38.4, chr-F: 0.566
testset: URL, BLEU: 5.3, chr-F: 0.351
testset: URL, BLEU: 7.3, chr-F: 0.338
testset: URL, BLEU: 38.0, chr-F: 0.571
testset: URL, BLEU: 10.7, chr-F: 0.116
testset: URL, BLEU: 36.2, chr-F: 0.587
testset: URL, BLEU: 2.4, chr-F: 0.233
testset: URL, BLEU: 6.5, chr-F: 0.368
testset: URL, BLEU: 27.5, chr-F: 0.484
testset: URL, BLEU: 0.8, chr-F: 0.082
testset: URL, BLEU: 9.7, chr-F: 0.168
testset: URL, BLEU: 32.5, chr-F: 0.522
testset: URL, BLEU: 45.2, chr-F: 0.656
testset: URL, BLEU: 32.2, chr-F: 0.554
testset: URL, BLEU: 33.6, chr-F: 0.577
testset: URL, BLEU: 33.3, chr-F: 0.536
testset: URL, BLEU: 19.0, chr-F: 0.113
testset: URL, BLEU: 40.8, chr-F: 0.605
testset: URL, BLEU: 12.7, chr-F: 0.288
testset: URL, BLEU: 19.7, chr-F: 0.285
testset: URL, BLEU: 18.7, chr-F: 0.359
testset: URL, BLEU: 30.1, chr-F: 0.455
testset: URL, BLEU: 34.7, chr-F: 0.540
testset: URL, BLEU: 0.0, chr-F: 0.042
testset: URL, BLEU: 42.7, chr-F: 0.835
testset: URL, BLEU: 35.0, chr-F: 0.587
testset: URL, BLEU: 30.8, chr-F: 0.534
testset: URL, BLEU: 27.9, chr-F: 0.512
testset: URL, BLEU: 33.8, chr-F: 0.537
testset: URL, BLEU: 0.4, chr-F: 0.038
testset: URL, BLEU: 7.6, chr-F: 0.384
testset: URL, BLEU: 37.9, chr-F: 0.559
testset: URL, BLEU: 31.3, chr-F: 0.528
testset: URL, BLEU: 16.0, chr-F: 0.060
testset: URL, BLEU: 29.0, chr-F: 0.512
testset: URL, BLEU: 37.6, chr-F: 0.553
testset: URL, BLEU: 1.6, chr-F: 0.138
testset: URL, BLEU: 4.2, chr-F: 0.278
testset: URL, BLEU: 33.0, chr-F: 0.524
testset: URL, BLEU: 16.3, chr-F: 0.308
testset: URL, BLEU: 10.7, chr-F: 0.045
testset: URL, BLEU: 22.3, chr-F: 0.427
testset: URL, BLEU: 5.9, chr-F: 0.310
testset: URL, BLEU: 20.6, chr-F: 0.459
testset: URL, BLEU: 1.5, chr-F: 0.152
testset: URL, BLEU: 31.0, chr-F: 0.546
testset: URL, BLEU: 5.5, chr-F: 0.326
testset: URL, BLEU: 12.7, chr-F: 0.365
testset: URL, BLEU: 9.0, chr-F: 0.320
testset: URL, BLEU: 26.6, chr-F: 0.495
testset: URL, BLEU: 5.6, chr-F: 0.210
testset: URL, BLEU: 1.0, chr-F: 0.169
testset: URL, BLEU: 7.9, chr-F: 0.328
testset: URL, BLEU: 31.1, chr-F: 0.519
testset: URL, BLEU: 22.0, chr-F: 0.489
testset: URL, BLEU: 19.4, chr-F: 0.263
testset: URL, BLEU: 19.0, chr-F: 0.217
testset: URL, BLEU: 38.5, chr-F: 0.662
testset: URL, BLEU: 6.6, chr-F: 0.305
testset: URL, BLEU: 11.5, chr-F: 0.350
testset: URL, BLEU: 31.1, chr-F: 0.517
testset: URL, BLEU: 31.2, chr-F: 0.528
testset: URL, BLEU: 4.9, chr-F: 0.261
testset: URL, BLEU: 7.3, chr-F: 0.325
testset: URL, BLEU: 0.0, chr-F: 0.008
testset: URL, BLEU: 4.8, chr-F: 0.198
testset: URL, BLEU: 31.3, chr-F: 0.540
testset: URL, BLEU: 24.5, chr-F: 0.476
testset: URL, BLEU: 25.7, chr-F: 0.492
testset: URL, BLEU: 20.7, chr-F: 0.400
testset: URL, BLEU: 30.9, chr-F: 0.526
testset: URL, BLEU: 32.0, chr-F: 0.507
testset: URL, BLEU: 41.1, chr-F: 0.622
testset: URL, BLEU: 7.1, chr-F: 0.367
testset: URL, BLEU: 4.7, chr-F: 0.253
testset: URL, BLEU: 2.5, chr-F: 0.167
testset: URL, BLEU: 11.7, chr-F: 0.217
testset: URL, BLEU: 3.9, chr-F: 0.224
testset: URL, BLEU: 40.7, chr-F: 0.420
testset: URL, BLEU: 2.1, chr-F: 0.134
testset: URL, BLEU: 3.4, chr-F: 0.244
testset: URL, BLEU: 17.2, chr-F: 0.310
testset: URL, BLEU: 32.8, chr-F: 0.524
testset: URL, BLEU: 5.7, chr-F: 0.254
testset: URL, BLEU: 5.3, chr-F: 0.023
testset: URL, BLEU: 3.5, chr-F: 0.237
testset: URL, BLEU: 11.9, chr-F: 0.335
testset: URL, BLEU: 23.7, chr-F: 0.300
testset: URL, BLEU: 0.0, chr-F: 0.146
testset: URL, BLEU: 14.1, chr-F: 0.313
testset: URL, BLEU: 33.2, chr-F: 0.528
testset: URL, BLEU: 33.4, chr-F: 0.518
testset: URL, BLEU: 29.9, chr-F: 0.489
testset: URL, BLEU: 19.5, chr-F: 0.405
testset: URL, BLEU: 28.6, chr-F: 0.499
testset: URL, BLEU: 5.5, chr-F: 0.296
testset: URL, BLEU: 18.0, chr-F: 0.546
testset: URL, BLEU: 18.0, chr-F: 0.452
testset: URL, BLEU: 20.3, chr-F: 0.406
testset: URL, BLEU: 33.1, chr-F: 0.541
testset: URL, BLEU: 12.4, chr-F: 0.348
testset: URL, BLEU: 33.4, chr-F: 0.519
testset: URL, BLEU: 32.9, chr-F: 0.503
testset: URL, BLEU: 14.8, chr-F: 0.095
testset: URL, BLEU: 30.1, chr-F: 0.471
testset: URL, BLEU: 12.7, chr-F: 0.377
testset: URL, BLEU: 46.9, chr-F: 0.624
testset: URL, BLEU: 1.1, chr-F: 0.143
testset: URL, BLEU: 21.6, chr-F: 0.446
testset: URL, BLEU: 28.1, chr-F: 0.526
testset: URL, BLEU: 22.8, chr-F: 0.466
testset: URL, BLEU: 16.9, chr-F: 0.442
testset: URL, BLEU: 30.8, chr-F: 0.510
testset: URL, BLEU: 49.1, chr-F: 0.696
testset: URL, BLEU: 27.2, chr-F: 0.497
testset: URL, BLEU: 0.5, chr-F: 0.049
testset: URL, BLEU: 5.3, chr-F: 0.204
testset: URL, BLEU: 22.4, chr-F: 0.476
testset: URL, BLEU: 39.3, chr-F: 0.581
testset: URL, BLEU: 30.9, chr-F: 0.531
testset: URL, BLEU: 0.7, chr-F: 0.109
testset: URL, BLEU: 0.9, chr-F: 0.060
testset: URL, BLEU: 28.9, chr-F: 0.487
testset: URL, BLEU: 41.0, chr-F: 0.595
testset: URL, BLEU: 13.9, chr-F: 0.188
testset: URL, BLEU: 7.9, chr-F: 0.244
testset: URL, BLEU: 41.4, chr-F: 0.610
testset: URL, BLEU: 15.8, chr-F: 0.397
testset: URL, BLEU: 7.0, chr-F: 0.060
testset: URL, BLEU: 7.4, chr-F: 0.303
testset: URL, BLEU: 22.2, chr-F: 0.415
testset: URL, BLEU: 48.8, chr-F: 0.683
testset: URL, BLEU: 1.7, chr-F: 0.181
testset: URL, BLEU: 0.3, chr-F: 0.010
testset: URL, BLEU: 0.1, chr-F: 0.005
testset: URL, BLEU: 5.6, chr-F: 0.051
testset: URL, BLEU: 15.0, chr-F: 0.365
testset: URL, BLEU: 19.9, chr-F: 0.409
testset: URL, BLEU: 33.2, chr-F: 0.529
testset: URL, BLEU: 16.1, chr-F: 0.331
testset: URL, BLEU: 5.1, chr-F: 0.240
testset: URL, BLEU: 13.5, chr-F: 0.357
testset: URL, BLEU: 18.0, chr-F: 0.410
testset: URL, BLEU: 42.7, chr-F: 0.646
testset: URL, BLEU: 0.4, chr-F: 0.088
testset: URL, BLEU: 5.6, chr-F: 0.237
testset: URL, BLEU: 0.9, chr-F: 0.157
testset: URL, BLEU: 9.0, chr-F: 0.382
testset: URL, BLEU: 23.7, chr-F: 0.510
testset: URL, BLEU: 22.4, chr-F: 0.477
testset: URL, BLEU: 0.4, chr-F: 0.119
testset: URL, BLEU: 34.1, chr-F: 0.531
testset: URL, BLEU: 29.4, chr-F: 0.416
testset: URL, BLEU: 37.1, chr-F: 0.568
testset: URL, BLEU: 14.0, chr-F: 0.405
testset: URL, BLEU: 15.4, chr-F: 0.390
testset: URL, BLEU: 34.0, chr-F: 0.550
testset: URL, BLEU: 41.1, chr-F: 0.608
testset: URL, BLEU: 8.0, chr-F: 0.353
testset: URL, BLEU: 0.4, chr-F: 0.010
testset: URL, BLEU: 0.2, chr-F: 0.060
testset: URL, BLEU: 0.6, chr-F: 0.122
testset: URL, BLEU: 26.3, chr-F: 0.498
testset: URL, BLEU: 41.6, chr-F: 0.638
testset: URL, BLEU: 0.3, chr-F: 0.095
testset: URL, BLEU: 4.0, chr-F: 0.219
testset: URL, BLEU: 31.9, chr-F: 0.550
testset: URL, BLEU: 0.2, chr-F: 0.013
testset: URL, BLEU: 29.4, chr-F: 0.510
testset: URL, BLEU: 1.6, chr-F: 0.086
testset: URL, BLEU: 16.0, chr-F: 0.111
testset: URL, BLEU: 9.2, chr-F: 0.269
testset: URL, BLEU: 8.4, chr-F: 0.375
testset: URL, BLEU: 39.5, chr-F: 0.572
testset: URL, BLEU: 27.8, chr-F: 0.495
testset: URL, BLEU: 2.9, chr-F: 0.220
testset: URL, BLEU: 10.0, chr-F: 0.296
testset: URL, BLEU: 30.9, chr-F: 0.499
testset: URL, BLEU: 29.9, chr-F: 0.545
testset: URL, BLEU: 24.5, chr-F: 0.484
testset: URL, BLEU: 5.8, chr-F: 0.347
testset: URL, BLEU: 16.7, chr-F: 0.426
testset: URL, BLEU: 8.4, chr-F: 0.370
testset: URL, BLEU: 0.6, chr-F: 0.032
testset: URL, BLEU: 9.3, chr-F: 0.283
testset: URL, BLEU: 0.3, chr-F: 0.126
testset: URL, BLEU: 0.0, chr-F: 0.102
testset: URL, BLEU: 4.0, chr-F: 0.175
testset: URL, BLEU: 13.2, chr-F: 0.398
testset: URL, BLEU: 7.0, chr-F: 0.345
testset: URL, BLEU: 5.0, chr-F: 0.110
testset: URL, BLEU: 63.1, chr-F: 0.831
testset: URL, BLEU: 35.4, chr-F: 0.529
testset: URL, BLEU: 38.5, chr-F: 0.528
testset: URL, BLEU: 32.8, chr-F: 0.380
testset: URL, BLEU: 54.5, chr-F: 0.702
testset: URL, BLEU: 36.7, chr-F: 0.570
testset: URL, BLEU: 32.9, chr-F: 0.541
testset: URL, BLEU: 44.9, chr-F: 0.606
testset: URL, BLEU: 0.0, chr-F: 0.877
testset: URL, BLEU: 43.2, chr-F: 0.605
testset: URL, BLEU: 42.7, chr-F: 0.402
testset: URL, BLEU: 4.8, chr-F: 0.253
testset: URL, BLEU: 39.3, chr-F: 0.591
testset: URL, BLEU: 31.6, chr-F: 0.617
testset: URL, BLEU: 21.2, chr-F: 0.559
testset: URL, BLEU: 33.1, chr-F: 0.548
testset: URL, BLEU: 1.4, chr-F: 0.144
testset: URL, BLEU: 6.6, chr-F: 0.373
testset: URL, BLEU: 4.5, chr-F: 0.453
testset: URL, BLEU: 73.4, chr-F: 0.828
testset: URL, BLEU: 25.5, chr-F: 0.440
testset: URL, BLEU: 0.0, chr-F: 0.124
testset: URL, BLEU: 71.9, chr-F: 0.742
testset: URL, BLEU: 59.5, chr-F: 0.742
testset: URL, BLEU: 25.9, chr-F: 0.497
testset: URL, BLEU: 31.3, chr-F: 0.546
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 28.6, chr-F: 0.495
testset: URL, BLEU: 19.0, chr-F: 0.116
testset: URL, BLEU: 37.1, chr-F: 0.569
testset: URL, BLEU: 13.9, chr-F: 0.336
testset: URL, BLEU: 16.5, chr-F: 0.438
testset: URL, BLEU: 20.1, chr-F: 0.468
testset: URL, BLEU: 8.0, chr-F: 0.316
testset: URL, BLEU: 13.0, chr-F: 0.300
testset: URL, BLEU: 15.3, chr-F: 0.296
testset: URL, BLEU: 0.9, chr-F: 0.199
testset: URL, BLEU: 4.9, chr-F: 0.287
testset: URL, BLEU: 1.9, chr-F: 0.194
testset: URL, BLEU: 45.2, chr-F: 0.574
testset: URL, BLEU: 7.8, chr-F: 0.271
testset: URL, BLEU: 9.6, chr-F: 0.273
testset: URL, BLEU: 0.9, chr-F: 0.102
testset: URL, BLEU: 4.4, chr-F: 0.054
testset: URL, BLEU: 48.3, chr-F: 0.646
testset: URL, BLEU: 1.4, chr-F: 0.034
testset: URL, BLEU: 36.7, chr-F: 0.601
testset: URL, BLEU: 40.4, chr-F: 0.601
testset: URL, BLEU: 33.9, chr-F: 0.538
testset: URL, BLEU: 33.1, chr-F: 0.524
testset: URL, BLEU: 25.8, chr-F: 0.469
testset: URL, BLEU: 34.0, chr-F: 0.543
testset: URL, BLEU: 23.0, chr-F: 0.493
testset: URL, BLEU: 36.1, chr-F: 0.538
testset: URL, BLEU: 3.6, chr-F: 0.400
testset: URL, BLEU: 5.3, chr-F: 0.240
testset: URL, BLEU: 32.0, chr-F: 0.519
testset: URL, BLEU: 13.6, chr-F: 0.318
testset: URL, BLEU: 3.8, chr-F: 0.199
testset: URL, BLEU: 33.4, chr-F: 0.547
testset: URL, BLEU: 32.6, chr-F: 0.546
testset: URL, BLEU: 1.4, chr-F: 0.166
testset: URL, BLEU: 8.0, chr-F: 0.314
testset: URL, BLEU: 10.7, chr-F: 0.520
testset: URL, BLEU: 59.9, chr-F: 0.631
testset: URL, BLEU: 38.0, chr-F: 0.718
testset: URL, BLEU: 2.5, chr-F: 0.213
testset: URL, BLEU: 11.0, chr-F: 0.368
testset: URL, BLEU: 33.0, chr-F: 0.524
testset: URL, BLEU: 40.4, chr-F: 0.574
testset: URL, BLEU: 0.1, chr-F: 0.008
testset: URL, BLEU: 32.7, chr-F: 0.553
testset: URL, BLEU: 26.8, chr-F: 0.496
testset: URL, BLEU: 45.7, chr-F: 0.651
testset: URL, BLEU: 11.8, chr-F: 0.263
testset: URL, BLEU: 31.7, chr-F: 0.528
testset: URL, BLEU: 3.6, chr-F: 0.196
testset: URL, BLEU: 36.7, chr-F: 0.586
testset: URL, BLEU: 17.1, chr-F: 0.451
testset: URL, BLEU: 17.1, chr-F: 0.375
testset: URL, BLEU: 38.1, chr-F: 0.565
testset: URL, BLEU: 0.0, chr-F: 1.000
testset: URL, BLEU: 14.0, chr-F: 0.404
testset: URL, BLEU: 1.5, chr-F: 0.014
testset: URL, BLEU: 68.7, chr-F: 0.695
testset: URL, BLEU: 25.8, chr-F: 0.314
testset: URL, BLEU: 13.6, chr-F: 0.319
testset: URL, BLEU: 48.3, chr-F: 0.680
testset: URL, BLEU: 28.3, chr-F: 0.454
testset: URL, BLEU: 4.4, chr-F: 0.206
testset: URL, BLEU: 8.0, chr-F: 0.282
testset: URL, BLEU: 5.2, chr-F: 0.237
testset: URL, BLEU: 9.9, chr-F: 0.395
testset: URL, BLEU: 35.4, chr-F: 0.868
testset: URL, BLEU: 0.8, chr-F: 0.077
testset: URL, BLEU: 4.9, chr-F: 0.240
testset: URL, BLEU: 11.3, chr-F: 0.054
testset: URL, BLEU: 19.0, chr-F: 0.583
testset: URL, BLEU: 5.4, chr-F: 0.320
testset: URL, BLEU: 6.3, chr-F: 0.239
testset: URL, BLEU: 12.8, chr-F: 0.341
testset: URL, BLEU: 17.5, chr-F: 0.382
testset: URL, BLEU: 42.7, chr-F: 0.797
testset: URL, BLEU: 15.5, chr-F: 0.338
testset: URL, BLEU: 2.3, chr-F: 0.176
testset: URL, BLEU: 4.5, chr-F: 0.207
testset: URL, BLEU: 18.9, chr-F: 0.367
testset: URL, BLEU: 6.0, chr-F: 0.156
testset: URL, BLEU: 32.2, chr-F: 0.448
testset: URL, BLEU: 1.3, chr-F: 0.142
testset: URL, BLEU: 15.3, chr-F: 0.363
testset: URL, BLEU: 3.2, chr-F: 0.166
testset: URL, BLEU: 0.1, chr-F: 0.090
testset: URL, BLEU: 1.8, chr-F: 0.206
testset: URL, BLEU: 27.8, chr-F: 0.560
testset: URL, BLEU: 4.2, chr-F: 0.316
testset: URL, BLEU: 24.6, chr-F: 0.466
testset: URL, BLEU: 24.5, chr-F: 0.431
testset: URL, BLEU: 5.0, chr-F: 0.318
testset: URL, BLEU: 19.0, chr-F: 0.390
testset: URL, BLEU: 15.0, chr-F: 0.258
testset: URL, BLEU: 7.4, chr-F: 0.326
testset: URL, BLEU: 12.3, chr-F: 0.325
testset: URL, BLEU: 14.2, chr-F: 0.324
testset: URL, BLEU: 16.1, chr-F: 0.369
testset: URL, BLEU: 3.2, chr-F: 0.125
testset: URL, BLEU: 55.9, chr-F: 0.672
testset: URL, BLEU: 0.3, chr-F: 0.083
testset: URL, BLEU: 7.2, chr-F: 0.383
testset: URL, BLEU: 0.0, chr-F: 0.102
testset: URL, BLEU: 1.9, chr-F: 0.135
### System Info:
* hf\_name: ine-ine
* source\_languages: ine
* target\_languages: ine
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'en', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']
* src\_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos\_Latn', 'lad\_Latn', 'lat\_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm\_Latn', 'srd', 'gcf\_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur\_Latn', 'arg', 'pes\_Thaa', 'sqi', 'csb\_Latn', 'fra', 'hat', 'non\_Latn', 'sco', 'pnb', 'roh', 'bul\_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw\_Latn', 'hsb', 'tly\_Latn', 'bul', 'bel', 'got\_Goth', 'lat\_Grek', 'ext', 'gla', 'mai', 'sin', 'hif\_Latn', 'eng', 'bre', 'nob\_Hebr', 'prg\_Latn', 'ang\_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr\_Arab', 'san\_Deva', 'gos', 'rus', 'fao', 'orv\_Cyrl', 'bel\_Latn', 'cos', 'zza', 'grc\_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk\_Cyrl', 'hye\_Latn', 'pdc', 'srp\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp\_Latn', 'zlm\_Latn', 'ind', 'rom', 'hye', 'scn', 'enm\_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus\_Latn', 'jdt\_Cyrl', 'gsw', 'glv', 'nld', 'snd\_Arab', 'kur\_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm\_Latn', 'ksh', 'pan\_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld\_Latn', 'ces', 'egl', 'vec', 'max\_Latn', 'pes\_Latn', 'ltg', 'nds'}
* tgt\_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos\_Latn', 'lad\_Latn', 'lat\_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm\_Latn', 'srd', 'gcf\_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur\_Latn', 'arg', 'pes\_Thaa', 'sqi', 'csb\_Latn', 'fra', 'hat', 'non\_Latn', 'sco', 'pnb', 'roh', 'bul\_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw\_Latn', 'hsb', 'tly\_Latn', 'bul', 'bel', 'got\_Goth', 'lat\_Grek', 'ext', 'gla', 'mai', 'sin', 'hif\_Latn', 'eng', 'bre', 'nob\_Hebr', 'prg\_Latn', 'ang\_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr\_Arab', 'san\_Deva', 'gos', 'rus', 'fao', 'orv\_Cyrl', 'bel\_Latn', 'cos', 'zza', 'grc\_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk\_Cyrl', 'hye\_Latn', 'pdc', 'srp\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp\_Latn', 'zlm\_Latn', 'ind', 'rom', 'hye', 'scn', 'enm\_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus\_Latn', 'jdt\_Cyrl', 'gsw', 'glv', 'nld', 'snd\_Arab', 'kur\_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm\_Latn', 'ksh', 'pan\_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld\_Latn', 'ces', 'egl', 'vec', 'max\_Latn', 'pes\_Latn', 'ltg', 'nds'}
* src\_multilingual: True
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ine
* tgt\_alpha3: ine
* short\_pair: ine-ine
* chrF2\_score: 0.509
* bleu: 30.8
* brevity\_penalty: 0.989
* ref\_len: 69953.0
* src\_name: Indo-European languages
* tgt\_name: Indo-European languages
* train\_date: 2020-07-27
* src\_alpha2: ine
* tgt\_alpha2: ine
* prefer\_old: False
* long\_pair: ine-ine
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
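For context on the `chrF2_score`, `bleu`, and `brevity_penalty` fields above: in standard BLEU (Papineni et al., 2002) the brevity penalty is $\mathrm{BP} = e^{\,1 - r/c}$ when the total hypothesis length $c$ is shorter than the reference length $r$, and $1$ otherwise. A short back-of-the-envelope derivation, assuming that standard definition, recovers the approximate aggregate hypothesis length from the values reported here:

$$
\mathrm{BP} = e^{\,1 - r/c}
\;\Rightarrow\;
c = \frac{r}{1 - \ln \mathrm{BP}}
\approx \frac{69953}{1 - \ln 0.989}
\approx 69188,
$$

i.e. the system output is roughly 1.1% shorter than the references in aggregate.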
chr-F: 0.219\ntestset: URL, BLEU: 47.2, chr-F: 0.650\ntestset: URL, BLEU: 58.8, chr-F: 0.749\ntestset: URL, BLEU: 27.1, chr-F: 0.527\ntestset: URL, BLEU: 41.5, chr-F: 0.616\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 30.8, chr-F: 0.518\ntestset: URL, BLEU: 36.6, chr-F: 0.578\ntestset: URL, BLEU: 53.8, chr-F: 0.696\ntestset: URL, BLEU: 4.8, chr-F: 0.184\ntestset: URL, BLEU: 15.9, chr-F: 0.489\ntestset: URL, BLEU: 21.7, chr-F: 0.544\ntestset: URL, BLEU: 13.0, chr-F: 0.252\ntestset: URL, BLEU: 37.5, chr-F: 0.566\ntestset: URL, BLEU: 0.6, chr-F: 0.131\ntestset: URL, BLEU: 20.0, chr-F: 0.580\ntestset: URL, BLEU: 16.5, chr-F: 0.389\ntestset: URL, BLEU: 19.6, chr-F: 0.450\ntestset: URL, BLEU: 34.5, chr-F: 0.319\ntestset: URL, BLEU: 3.2, chr-F: 0.196\ntestset: URL, BLEU: 32.6, chr-F: 0.517\ntestset: URL, BLEU: 28.4, chr-F: 0.503\ntestset: URL, BLEU: 24.3, chr-F: 0.465\ntestset: URL, BLEU: 0.2, chr-F: 0.043\ntestset: URL, BLEU: 2.4, chr-F: 0.020\ntestset: URL, BLEU: 4.4, chr-F: 0.178\ntestset: URL, BLEU: 11.3, chr-F: 0.378\ntestset: URL, BLEU: 37.8, chr-F: 0.579\ntestset: URL, BLEU: 0.1, chr-F: 0.082\ntestset: URL, BLEU: 3.3, chr-F: 0.050\ntestset: URL, BLEU: 27.1, chr-F: 0.485\ntestset: URL, BLEU: 34.7, chr-F: 0.539\ntestset: URL, BLEU: 6.7, chr-F: 0.331\ntestset: URL, BLEU: 4.5, chr-F: 0.235\ntestset: URL, BLEU: 31.9, chr-F: 0.527\ntestset: URL, BLEU: 0.2, chr-F: 0.101\ntestset: URL, BLEU: 13.7, chr-F: 0.358\ntestset: URL, BLEU: 7.2, chr-F: 0.304\ntestset: URL, BLEU: 8.9, chr-F: 0.349\ntestset: URL, BLEU: 28.9, chr-F: 0.513\ntestset: URL, BLEU: 0.7, chr-F: 0.157\ntestset: URL, BLEU: 0.2, chr-F: 0.010\ntestset: URL, BLEU: 0.1, chr-F: 0.005\ntestset: URL, BLEU: 0.2, chr-F: 0.073\ntestset: URL, BLEU: 23.2, chr-F: 0.470\ntestset: URL, BLEU: 12.5, chr-F: 0.367\ntestset: URL, BLEU: 5.4, chr-F: 0.249\ntestset: URL, BLEU: 12.9, chr-F: 0.263\ntestset: URL, BLEU: 16.5, chr-F: 0.395\ntestset: URL, BLEU: 29.2, chr-F: 0.536\ntestset: URL, BLEU: 0.6, chr-F: 0.092\ntestset: URL, BLEU: 11.2, chr-F: 0.183\ntestset: URL, BLEU: 0.3, chr-F: 0.112\ntestset: URL, BLEU: 6.4, chr-F: 0.301\ntestset: URL, BLEU: 29.6, chr-F: 0.502\ntestset: URL, BLEU: 17.4, chr-F: 0.445\ntestset: URL, BLEU: 18.5, chr-F: 0.380\ntestset: URL, BLEU: 7.9, chr-F: 0.245\ntestset: URL, BLEU: 21.9, chr-F: 0.449\ntestset: URL, BLEU: 21.9, chr-F: 0.478\ntestset: URL, BLEU: 13.6, chr-F: 0.391\ntestset: URL, BLEU: 37.2, chr-F: 0.574\ntestset: URL, BLEU: 34.5, chr-F: 0.562\ntestset: URL, BLEU: 4.7, chr-F: 0.261\ntestset: URL, BLEU: 0.2, chr-F: 0.006\ntestset: URL, BLEU: 0.6, chr-F: 0.064\ntestset: URL, BLEU: 0.2, chr-F: 0.064\ntestset: URL, BLEU: 23.6, chr-F: 0.477\ntestset: URL, BLEU: 25.1, chr-F: 0.480\ntestset: URL, BLEU: 0.2, chr-F: 0.070\ntestset: URL, BLEU: 0.2, chr-F: 0.059\ntestset: URL, BLEU: 5.2, chr-F: 0.179\ntestset: URL, BLEU: 25.7, chr-F: 0.484\ntestset: URL, BLEU: 27.1, chr-F: 0.494\ntestset: URL, BLEU: 1.6, chr-F: 0.076\ntestset: URL, BLEU: 10.8, chr-F: 0.281\ntestset: URL, BLEU: 8.1, chr-F: 0.251\ntestset: URL, BLEU: 31.5, chr-F: 0.534\ntestset: URL, BLEU: 0.6, chr-F: 0.144\ntestset: URL, BLEU: 39.1, chr-F: 0.572\ntestset: URL, BLEU: 0.1, chr-F: 0.088\ntestset: URL, BLEU: 13.1, chr-F: 0.406\ntestset: URL, BLEU: 27.2, chr-F: 0.489\ntestset: URL, BLEU: 13.4, chr-F: 0.350\ntestset: URL, BLEU: 6.0, chr-F: 0.262\ntestset: URL, BLEU: 14.1, chr-F: 0.366\ntestset: URL, BLEU: 19.0, chr-F: 0.424\ntestset: URL, BLEU: 15.4, chr-F: 0.342\ntestset: URL, BLEU: 15.2, chr-F: 0.315\ntestset: URL, BLEU: 35.4, chr-F: 0.394\ntestset: URL, 
BLEU: 12.6, chr-F: 0.401\ntestset: URL, BLEU: 2.9, chr-F: 0.168\ntestset: URL, BLEU: 5.2, chr-F: 0.207\ntestset: URL, BLEU: 6.4, chr-F: 0.215\ntestset: URL, BLEU: 1.6, chr-F: 0.180\ntestset: URL, BLEU: 3.9, chr-F: 0.199\ntestset: URL, BLEU: 26.6, chr-F: 0.483\ntestset: URL, BLEU: 20.2, chr-F: 0.398\ntestset: URL, BLEU: 12.1, chr-F: 0.380\ntestset: URL, BLEU: 0.7, chr-F: 0.039\ntestset: URL, BLEU: 53.7, chr-F: 0.513\ntestset: URL, BLEU: 30.5, chr-F: 0.503\ntestset: URL, BLEU: 43.1, chr-F: 0.589\ntestset: URL, BLEU: 12.7, chr-F: 0.541\ntestset: URL, BLEU: 5.3, chr-F: 0.210\ntestset: URL, BLEU: 39.5, chr-F: 0.563\ntestset: URL, BLEU: 11.6, chr-F: 0.343\ntestset: URL, BLEU: 30.9, chr-F: 0.524\ntestset: URL, BLEU: 57.6, chr-F: 0.572\ntestset: URL, BLEU: 4.9, chr-F: 0.244\ntestset: URL, BLEU: 38.0, chr-F: 0.562\ntestset: URL, BLEU: 40.8, chr-F: 0.615\ntestset: URL, BLEU: 72.6, chr-F: 0.846\ntestset: URL, BLEU: 26.8, chr-F: 0.514\ntestset: URL, BLEU: 27.1, chr-F: 0.493\ntestset: URL, BLEU: 30.8, chr-F: 0.512\ntestset: URL, BLEU: 30.8, chr-F: 0.475\ntestset: URL, BLEU: 36.0, chr-F: 0.521\ntestset: URL, BLEU: 12.6, chr-F: 0.364\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 46.1, chr-F: 0.633\ntestset: URL, BLEU: 5.1, chr-F: 0.136\ntestset: URL, BLEU: 5.1, chr-F: 0.199\ntestset: URL, BLEU: 0.8, chr-F: 0.208\ntestset: URL, BLEU: 16.8, chr-F: 0.380\ntestset: URL, BLEU: 0.2, chr-F: 0.002\ntestset: URL, BLEU: 16.6, chr-F: 0.415\ntestset: URL, BLEU: 7.0, chr-F: 0.321\ntestset: URL, BLEU: 0.2, chr-F: 0.003\ntestset: URL, BLEU: 6.6, chr-F: 0.251\ntestset: URL, BLEU: 31.5, chr-F: 0.513\ntestset: URL, BLEU: 33.5, chr-F: 0.550\ntestset: URL, BLEU: 25.6, chr-F: 0.466\ntestset: URL, BLEU: 0.1, chr-F: 0.035\ntestset: URL, BLEU: 0.8, chr-F: 0.135\ntestset: URL, BLEU: 1.4, chr-F: 0.194\ntestset: URL, BLEU: 18.8, chr-F: 0.422\ntestset: URL, BLEU: 41.2, chr-F: 0.591\ntestset: URL, BLEU: 27.9, chr-F: 0.503\ntestset: URL, BLEU: 0.7, chr-F: 0.125\ntestset: URL, BLEU: 0.1, chr-F: 0.062\ntestset: URL, BLEU: 30.7, chr-F: 0.540\ntestset: URL, BLEU: 4.9, chr-F: 0.283\ntestset: URL, BLEU: 3.9, chr-F: 0.217\ntestset: URL, BLEU: 5.9, chr-F: 0.276\ntestset: URL, BLEU: 4.8, chr-F: 0.239\ntestset: URL, BLEU: 34.6, chr-F: 0.551\ntestset: URL, BLEU: 0.2, chr-F: 0.099\ntestset: URL, BLEU: 5.5, chr-F: 0.040\ntestset: URL, BLEU: 13.1, chr-F: 0.357\ntestset: URL, BLEU: 0.4, chr-F: 0.085\ntestset: URL, BLEU: 7.4, chr-F: 0.293\ntestset: URL, BLEU: 20.0, chr-F: 0.415\ntestset: URL, BLEU: 29.9, chr-F: 0.528\ntestset: URL, BLEU: 5.9, chr-F: 0.220\ntestset: URL, BLEU: 0.5, chr-F: 0.137\ntestset: URL, BLEU: 0.1, chr-F: 0.009\ntestset: URL, BLEU: 0.0, chr-F: 0.005\ntestset: URL, BLEU: 0.5, chr-F: 0.103\ntestset: URL, BLEU: 6.4, chr-F: 0.241\ntestset: URL, BLEU: 28.2, chr-F: 0.460\ntestset: URL, BLEU: 26.0, chr-F: 0.485\ntestset: URL, BLEU: 0.8, chr-F: 0.228\ntestset: URL, BLEU: 11.2, chr-F: 0.364\ntestset: URL, BLEU: 10.6, chr-F: 0.277\ntestset: URL, BLEU: 10.9, chr-F: 0.307\ntestset: URL, BLEU: 13.8, chr-F: 0.368\ntestset: URL, BLEU: 33.8, chr-F: 0.571\ntestset: URL, BLEU: 3.0, chr-F: 0.007\ntestset: URL, BLEU: 4.8, chr-F: 0.005\ntestset: URL, BLEU: 0.4, chr-F: 0.092\ntestset: URL, BLEU: 9.0, chr-F: 0.174\ntestset: URL, BLEU: 0.5, chr-F: 0.144\ntestset: URL, BLEU: 0.1, chr-F: 0.000\ntestset: URL, BLEU: 7.7, chr-F: 0.333\ntestset: URL, BLEU: 25.1, chr-F: 0.480\ntestset: URL, BLEU: 0.4, chr-F: 0.101\ntestset: URL, BLEU: 21.0, chr-F: 0.492\ntestset: URL, BLEU: 0.5, chr-F: 0.143\ntestset: URL, BLEU: 0.5, chr-F: 0.135\ntestset: 
URL, BLEU: 15.6, chr-F: 0.345\ntestset: URL, BLEU: 9.3, chr-F: 0.251\ntestset: URL, BLEU: 9.5, chr-F: 0.326\ntestset: URL, BLEU: 54.1, chr-F: 0.747\ntestset: URL, BLEU: 29.8, chr-F: 0.503\ntestset: URL, BLEU: 20.0, chr-F: 0.449\ntestset: URL, BLEU: 9.3, chr-F: 0.231\ntestset: URL, BLEU: 12.2, chr-F: 0.357\ntestset: URL, BLEU: 0.2, chr-F: 0.003\ntestset: URL, BLEU: 37.1, chr-F: 0.570\ntestset: URL, BLEU: 0.5, chr-F: 0.078\ntestset: URL, BLEU: 38.4, chr-F: 0.575\ntestset: URL, BLEU: 4.8, chr-F: 0.249\ntestset: URL, BLEU: 2.8, chr-F: 0.185\ntestset: URL, BLEU: 0.1, chr-F: 0.011\ntestset: URL, BLEU: 2.6, chr-F: 0.166\ntestset: URL, BLEU: 2.6, chr-F: 0.214\ntestset: URL, BLEU: 39.8, chr-F: 0.566\ntestset: URL, BLEU: 1.0, chr-F: 0.131\ntestset: URL, BLEU: 0.9, chr-F: 0.124\ntestset: URL, BLEU: 26.2, chr-F: 0.500\ntestset: URL, BLEU: 31.5, chr-F: 0.545\ntestset: URL, BLEU: 0.2, chr-F: 0.088\ntestset: URL, BLEU: 0.4, chr-F: 0.108\ntestset: URL, BLEU: 1.8, chr-F: 0.192\ntestset: URL, BLEU: 7.6, chr-F: 0.313\ntestset: URL, BLEU: 27.6, chr-F: 0.508\ntestset: URL, BLEU: 0.1, chr-F: 0.011\ntestset: URL, BLEU: 28.6, chr-F: 0.496\ntestset: URL, BLEU: 2.0, chr-F: 0.098\ntestset: URL, BLEU: 0.9, chr-F: 0.080\ntestset: URL, BLEU: 24.5, chr-F: 0.501\ntestset: URL, BLEU: 1.3, chr-F: 0.105\ntestset: URL, BLEU: 3.0, chr-F: 0.178\ntestset: URL, BLEU: 12.5, chr-F: 0.298\ntestset: URL, BLEU: 1.7, chr-F: 0.214\ntestset: URL, BLEU: 36.3, chr-F: 0.575\ntestset: URL, BLEU: 22.1, chr-F: 0.459\ntestset: URL, BLEU: 5.2, chr-F: 0.316\ntestset: URL, BLEU: 42.4, chr-F: 0.591\ntestset: URL, BLEU: 0.6, chr-F: 0.145\ntestset: URL, BLEU: 1.9, chr-F: 0.255\ntestset: URL, BLEU: 0.3, chr-F: 0.054\ntestset: URL, BLEU: 27.3, chr-F: 0.478\ntestset: URL, BLEU: 7.0, chr-F: 0.310\ntestset: URL, BLEU: 0.9, chr-F: 0.116\ntestset: URL, BLEU: 4.0, chr-F: 0.164\ntestset: URL, BLEU: 5.9, chr-F: 0.260\ntestset: URL, BLEU: 0.4, chr-F: 0.071\ntestset: URL, BLEU: 20.1, chr-F: 0.420\ntestset: URL, BLEU: 0.6, chr-F: 0.057\ntestset: URL, BLEU: 22.8, chr-F: 0.278\ntestset: URL, BLEU: 9.0, chr-F: 0.360\ntestset: URL, BLEU: 19.0, chr-F: 0.324\ntestset: URL, BLEU: 35.8, chr-F: 0.523\ntestset: URL, BLEU: 35.7, chr-F: 0.495\ntestset: URL, BLEU: 42.7, chr-F: 0.644\ntestset: URL, BLEU: 22.4, chr-F: 0.477\ntestset: URL, BLEU: 4.3, chr-F: 0.141\ntestset: URL, BLEU: 9.0, chr-F: 0.345\ntestset: URL, BLEU: 16.0, chr-F: 0.289\ntestset: URL, BLEU: 4.1, chr-F: 0.143\ntestset: URL, BLEU: 3.0, chr-F: 0.247\ntestset: URL, BLEU: 11.6, chr-F: 0.294\ntestset: URL, BLEU: 19.0, chr-F: 0.220\ntestset: URL, BLEU: 4.8, chr-F: 0.188\ntestset: URL, BLEU: 6.1, chr-F: 0.136\ntestset: URL, BLEU: 16.0, chr-F: 0.054\ntestset: URL, BLEU: 0.7, chr-F: 0.124\ntestset: URL, BLEU: 5.4, chr-F: 0.238\ntestset: URL, BLEU: 10.5, chr-F: 0.155\ntestset: URL, BLEU: 18.6, chr-F: 0.427\ntestset: URL, BLEU: 38.9, chr-F: 0.611\ntestset: URL, BLEU: 6.8, chr-F: 0.276\ntestset: URL, BLEU: 10.5, chr-F: 0.138\ntestset: URL, BLEU: 12.7, chr-F: 0.088\ntestset: URL, BLEU: 7.6, chr-F: 0.109\ntestset: URL, BLEU: 18.8, chr-F: 0.254\ntestset: URL, BLEU: 21.4, chr-F: 0.339\ntestset: URL, BLEU: 4.0, chr-F: 0.440\ntestset: URL, BLEU: 5.3, chr-F: 0.231\ntestset: URL, BLEU: 24.9, chr-F: 0.420\ntestset: URL, BLEU: 0.0, chr-F: 0.056\ntestset: URL, BLEU: 16.0, chr-F: 0.171\ntestset: URL, BLEU: 2.1, chr-F: 0.258\ntestset: URL, BLEU: 43.5, chr-F: 0.557\ntestset: URL, BLEU: 21.3, chr-F: 0.402\ntestset: URL, BLEU: 3.0, chr-F: 0.164\ntestset: URL, BLEU: 12.7, chr-F: 0.142\ntestset: URL, BLEU: 10.5, chr-F: 0.131\ntestset: 
URL, BLEU: 0.6, chr-F: 0.087\ntestset: URL, BLEU: 26.2, chr-F: 0.443\ntestset: URL, BLEU: 3.6, chr-F: 0.176\ntestset: URL, BLEU: 0.0, chr-F: 0.632\ntestset: URL, BLEU: 5.8, chr-F: 0.163\ntestset: URL, BLEU: 14.5, chr-F: 0.104\ntestset: URL, BLEU: 53.7, chr-F: 0.504\ntestset: URL, BLEU: 8.5, chr-F: 0.311\ntestset: URL, BLEU: 8.7, chr-F: 0.259\ntestset: URL, BLEU: 10.3, chr-F: 0.303\ntestset: URL, BLEU: 1.3, chr-F: 0.006\ntestset: URL, BLEU: 8.6, chr-F: 0.331\ntestset: URL, BLEU: 7.2, chr-F: 0.301\ntestset: URL, BLEU: 0.4, chr-F: 0.074\ntestset: URL, BLEU: 14.4, chr-F: 0.256\ntestset: URL, BLEU: 9.8, chr-F: 0.325\ntestset: URL, BLEU: 6.6, chr-F: 0.127\ntestset: URL, BLEU: 50.0, chr-F: 0.657\ntestset: URL, BLEU: 4.5, chr-F: 0.223\ntestset: URL, BLEU: 8.6, chr-F: 0.316\ntestset: URL, BLEU: 19.1, chr-F: 0.445\ntestset: URL, BLEU: 9.8, chr-F: 0.313\ntestset: URL, BLEU: 9.1, chr-F: 0.318\ntestset: URL, BLEU: 4.8, chr-F: 0.213\ntestset: URL, BLEU: 2.0, chr-F: 0.138\ntestset: URL, BLEU: 49.7, chr-F: 0.630\ntestset: URL, BLEU: 1.0, chr-F: 0.105\ntestset: URL, BLEU: 0.0, chr-F: 0.011\ntestset: URL, BLEU: 4.1, chr-F: 0.194\ntestset: URL, BLEU: 23.0, chr-F: 0.410\ntestset: URL, BLEU: 22.2, chr-F: 0.448\ntestset: URL, BLEU: 6.4, chr-F: 0.341\ntestset: URL, BLEU: 1.2, chr-F: 0.035\ntestset: URL, BLEU: 3.4, chr-F: 0.204\ntestset: URL, BLEU: 31.2, chr-F: 0.528\ntestset: URL, BLEU: 33.9, chr-F: 0.570\ntestset: URL, BLEU: 26.9, chr-F: 0.490\ntestset: URL, BLEU: 0.2, chr-F: 0.039\ntestset: URL, BLEU: 0.3, chr-F: 0.061\ntestset: URL, BLEU: 17.3, chr-F: 0.455\ntestset: URL, BLEU: 47.1, chr-F: 0.634\ntestset: URL, BLEU: 31.1, chr-F: 0.530\ntestset: URL, BLEU: 0.7, chr-F: 0.061\ntestset: URL, BLEU: 32.4, chr-F: 0.544\ntestset: URL, BLEU: 40.1, chr-F: 0.583\ntestset: URL, BLEU: 5.1, chr-F: 0.207\ntestset: URL, BLEU: 1.8, chr-F: 0.304\ntestset: URL, BLEU: 5.6, chr-F: 0.233\ntestset: URL, BLEU: 0.3, chr-F: 0.149\ntestset: URL, BLEU: 6.4, chr-F: 0.412\ntestset: URL, BLEU: 11.4, chr-F: 0.357\ntestset: URL, BLEU: 0.1, chr-F: 0.067\ntestset: URL, BLEU: 9.1, chr-F: 0.316\ntestset: URL, BLEU: 16.8, chr-F: 0.416\ntestset: URL, BLEU: 34.5, chr-F: 0.562\ntestset: URL, BLEU: 5.5, chr-F: 0.204\ntestset: URL, BLEU: 0.2, chr-F: 0.001\ntestset: URL, BLEU: 0.1, chr-F: 0.006\ntestset: URL, BLEU: 20.8, chr-F: 0.424\ntestset: URL, BLEU: 28.9, chr-F: 0.511\ntestset: URL, BLEU: 5.1, chr-F: 0.336\ntestset: URL, BLEU: 11.5, chr-F: 0.401\ntestset: URL, BLEU: 17.2, chr-F: 0.362\ntestset: URL, BLEU: 37.7, chr-F: 0.606\ntestset: URL, BLEU: 2.8, chr-F: 0.148\ntestset: URL, BLEU: 14.3, chr-F: 0.188\ntestset: URL, BLEU: 0.4, chr-F: 0.129\ntestset: URL, BLEU: 2.8, chr-F: 0.258\ntestset: URL, BLEU: 30.3, chr-F: 0.490\ntestset: URL, BLEU: 0.3, chr-F: 0.099\ntestset: URL, BLEU: 18.3, chr-F: 0.461\ntestset: URL, BLEU: 0.6, chr-F: 0.185\ntestset: URL, BLEU: 1.2, chr-F: 0.163\ntestset: URL, BLEU: 15.3, chr-F: 0.385\ntestset: URL, BLEU: 45.7, chr-F: 0.393\ntestset: URL, BLEU: 29.5, chr-F: 0.498\ntestset: URL, BLEU: 19.4, chr-F: 0.456\ntestset: URL, BLEU: 12.9, chr-F: 0.356\ntestset: URL, BLEU: 33.0, chr-F: 0.532\ntestset: URL, BLEU: 1.2, chr-F: 0.072\ntestset: URL, BLEU: 35.1, chr-F: 0.553\ntestset: URL, BLEU: 6.8, chr-F: 0.313\ntestset: URL, BLEU: 0.2, chr-F: 0.004\ntestset: URL, BLEU: 3.6, chr-F: 0.112\ntestset: URL, BLEU: 78.3, chr-F: 0.917\ntestset: URL, BLEU: 0.1, chr-F: 0.084\ntestset: URL, BLEU: 0.3, chr-F: 0.117\ntestset: URL, BLEU: 22.4, chr-F: 0.468\ntestset: URL, BLEU: 33.0, chr-F: 0.559\ntestset: URL, BLEU: 0.6, chr-F: 0.084\ntestset: URL, 
BLEU: 5.9, chr-F: 0.278\ntestset: URL, BLEU: 4.2, chr-F: 0.257\ntestset: URL, BLEU: 29.7, chr-F: 0.531\ntestset: URL, BLEU: 28.8, chr-F: 0.498\ntestset: URL, BLEU: 0.4, chr-F: 0.056\ntestset: URL, BLEU: 1.7, chr-F: 0.222\ntestset: URL, BLEU: 2.4, chr-F: 0.207\ntestset: URL, BLEU: 38.6, chr-F: 0.598\ntestset: URL, BLEU: 23.9, chr-F: 0.455\ntestset: URL, BLEU: 1.2, chr-F: 0.159\ntestset: URL, BLEU: 44.2, chr-F: 0.609\ntestset: URL, BLEU: 2.4, chr-F: 0.123\ntestset: URL, BLEU: 2.8, chr-F: 0.244\ntestset: URL, BLEU: 0.5, chr-F: 0.034\ntestset: URL, BLEU: 26.7, chr-F: 0.474\ntestset: URL, BLEU: 2.3, chr-F: 0.333\ntestset: URL, BLEU: 0.6, chr-F: 0.088\ntestset: URL, BLEU: 5.3, chr-F: 0.178\ntestset: URL, BLEU: 8.7, chr-F: 0.271\ntestset: URL, BLEU: 19.2, chr-F: 0.394\ntestset: URL, BLEU: 12.3, chr-F: 0.482\ntestset: URL, BLEU: 8.3, chr-F: 0.286\ntestset: URL, BLEU: 6.1, chr-F: 0.181\ntestset: URL, BLEU: 12.7, chr-F: 0.535\ntestset: URL, BLEU: 4.1, chr-F: 0.144\ntestset: URL, BLEU: 0.5, chr-F: 0.033\ntestset: URL, BLEU: 12.4, chr-F: 0.127\ntestset: URL, BLEU: 6.9, chr-F: 0.233\ntestset: URL, BLEU: 0.5, chr-F: 0.045\ntestset: URL, BLEU: 0.0, chr-F: 0.244\ntestset: URL, BLEU: 4.2, chr-F: 0.280\ntestset: URL, BLEU: 21.7, chr-F: 0.448\ntestset: URL, BLEU: 22.9, chr-F: 0.431\ntestset: URL, BLEU: 10.7, chr-F: 0.140\ntestset: URL, BLEU: 31.8, chr-F: 0.455\ntestset: URL, BLEU: 0.5, chr-F: 0.040\ntestset: URL, BLEU: 0.7, chr-F: 0.204\ntestset: URL, BLEU: 34.8, chr-F: 0.528\ntestset: URL, BLEU: 8.1, chr-F: 0.318\ntestset: URL, BLEU: 21.4, chr-F: 0.324\ntestset: URL, BLEU: 0.1, chr-F: 0.000\ntestset: URL, BLEU: 6.6, chr-F: 0.127\ntestset: URL, BLEU: 35.7, chr-F: 0.576\ntestset: URL, BLEU: 32.6, chr-F: 0.511\ntestset: URL, BLEU: 17.7, chr-F: 0.342\ntestset: URL, BLEU: 12.1, chr-F: 0.304\ntestset: URL, BLEU: 31.7, chr-F: 0.438\ntestset: URL, BLEU: 30.6, chr-F: 0.479\ntestset: URL, BLEU: 0.5, chr-F: 0.156\ntestset: URL, BLEU: 27.5, chr-F: 0.247\ntestset: URL, BLEU: 16.1, chr-F: 0.330\ntestset: URL, BLEU: 4.0, chr-F: 0.167\ntestset: URL, BLEU: 13.2, chr-F: 0.257\ntestset: URL, BLEU: 6.0, chr-F: 0.241\ntestset: URL, BLEU: 0.0, chr-F: 0.170\ntestset: URL, BLEU: 0.0, chr-F: 0.427\ntestset: URL, BLEU: 0.0, chr-F: 1.000\ntestset: URL, BLEU: 31.8, chr-F: 0.374\ntestset: URL, BLEU: 11.5, chr-F: 0.416\ntestset: URL, BLEU: 15.1, chr-F: 0.348\ntestset: URL, BLEU: 17.5, chr-F: 0.329\ntestset: URL, BLEU: 13.1, chr-F: 0.346\ntestset: URL, BLEU: 12.1, chr-F: 0.306\ntestset: URL, BLEU: 8.0, chr-F: 0.035\ntestset: URL, BLEU: 20.8, chr-F: 0.299\ntestset: URL, BLEU: 13.7, chr-F: 0.355\ntestset: URL, BLEU: 24.7, chr-F: 0.423\ntestset: URL, BLEU: 12.7, chr-F: 0.322\ntestset: URL, BLEU: 7.8, chr-F: 0.288\ntestset: URL, BLEU: 13.5, chr-F: 0.390\ntestset: URL, BLEU: 32.0, chr-F: 0.490\ntestset: URL, BLEU: 5.0, chr-F: 0.135\ntestset: URL, BLEU: 18.0, chr-F: 0.403\ntestset: URL, BLEU: 16.9, chr-F: 0.377\ntestset: URL, BLEU: 0.0, chr-F: 0.077\ntestset: URL, BLEU: 2.4, chr-F: 0.328\ntestset: URL, BLEU: 0.0, chr-F: 0.673\ntestset: URL, BLEU: 2.5, chr-F: 0.139\ntestset: URL, BLEU: 24.5, chr-F: 0.458\ntestset: URL, BLEU: 13.3, chr-F: 0.324\ntestset: URL, BLEU: 30.4, chr-F: 0.539\ntestset: URL, BLEU: 30.2, chr-F: 0.448\ntestset: URL, BLEU: 37.9, chr-F: 0.571\ntestset: URL, BLEU: 45.8, chr-F: 0.627\ntestset: URL, BLEU: 31.1, chr-F: 0.561\ntestset: URL, BLEU: 36.2, chr-F: 0.573\ntestset: URL, BLEU: 22.7, chr-F: 0.524\ntestset: URL, BLEU: 47.4, chr-F: 0.674\ntestset: URL, BLEU: 28.4, chr-F: 0.465\ntestset: URL, BLEU: 53.2, chr-F: 
0.704\ntestset: URL, BLEU: 1.4, chr-F: 0.140\ntestset: URL, BLEU: 3.2, chr-F: 0.104\ntestset: URL, BLEU: 9.9, chr-F: 0.243\ntestset: URL, BLEU: 6.2, chr-F: 0.269\ntestset: URL, BLEU: 0.0, chr-F: 0.056\ntestset: URL, BLEU: 6.6, chr-F: 0.107\ntestset: URL, BLEU: 12.0, chr-F: 0.356\ntestset: URL, BLEU: 15.7, chr-F: 0.384\ntestset: URL, BLEU: 14.8, chr-F: 0.320\ntestset: URL, BLEU: 4.1, chr-F: 0.292\ntestset: URL, BLEU: 19.0, chr-F: 0.111\ntestset: URL, BLEU: 8.4, chr-F: 0.321\ntestset: URL, BLEU: 0.9, chr-F: 0.064\ntestset: URL, BLEU: 13.5, chr-F: 0.361\ntestset: URL, BLEU: 8.2, chr-F: 0.228\ntestset: URL, BLEU: 31.9, chr-F: 0.610\ntestset: URL, BLEU: 0.0, chr-F: 0.050\ntestset: URL, BLEU: 0.5, chr-F: 0.010\ntestset: URL, BLEU: 4.5, chr-F: 0.206\ntestset: URL, BLEU: 4.2, chr-F: 0.220\ntestset: URL, BLEU: 3.9, chr-F: 0.202\ntestset: URL, BLEU: 16.8, chr-F: 0.389\ntestset: URL, BLEU: 5.2, chr-F: 0.298\ntestset: URL, BLEU: 24.7, chr-F: 0.406\ntestset: URL, BLEU: 0.4, chr-F: 0.137\ntestset: URL, BLEU: 16.8, chr-F: 0.310\ntestset: URL, BLEU: 5.4, chr-F: 0.370\ntestset: URL, BLEU: 4.3, chr-F: 0.170\ntestset: URL, BLEU: 0.6, chr-F: 0.044\ntestset: URL, BLEU: 0.1, chr-F: 0.050\ntestset: URL, BLEU: 0.2, chr-F: 0.064\ntestset: URL, BLEU: 3.1, chr-F: 0.013\ntestset: URL, BLEU: 0.2, chr-F: 0.050\ntestset: URL, BLEU: 2.7, chr-F: 0.155\ntestset: URL, BLEU: 4.7, chr-F: 0.198\ntestset: URL, BLEU: 1.9, chr-F: 0.146\ntestset: URL, BLEU: 12.8, chr-F: 0.234\ntestset: URL, BLEU: 0.5, chr-F: 0.114\ntestset: URL, BLEU: 0.8, chr-F: 0.163\ntestset: URL, BLEU: 2.4, chr-F: 0.141\ntestset: URL, BLEU: 12.6, chr-F: 0.393\ntestset: URL, BLEU: 15.9, chr-F: 0.322\ntestset: URL, BLEU: 19.0, chr-F: 0.308\ntestset: URL, BLEU: 15.9, chr-F: 0.301\ntestset: URL, BLEU: 14.7, chr-F: 0.250\ntestset: URL, BLEU: 38.5, chr-F: 0.522\ntestset: URL, BLEU: 17.6, chr-F: 0.424\ntestset: URL, BLEU: 32.0, chr-F: 0.472\ntestset: URL, BLEU: 31.2, chr-F: 0.496\ntestset: URL, BLEU: 40.1, chr-F: 0.579\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 27.8, chr-F: 0.543\ntestset: URL, BLEU: 32.9, chr-F: 0.545\ntestset: URL, BLEU: 38.6, chr-F: 0.563\ntestset: URL, BLEU: 2.3, chr-F: 0.299\ntestset: URL, BLEU: 33.3, chr-F: 0.548\ntestset: URL, BLEU: 37.9, chr-F: 0.602\ntestset: URL, BLEU: 9.8, chr-F: 0.289\ntestset: URL, BLEU: 38.0, chr-F: 0.718\ntestset: URL, BLEU: 31.8, chr-F: 0.528\ntestset: URL, BLEU: 31.7, chr-F: 0.548\ntestset: URL, BLEU: 28.1, chr-F: 0.484\ntestset: URL, BLEU: 38.9, chr-F: 0.596\ntestset: URL, BLEU: 38.6, chr-F: 0.589\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 36.0, chr-F: 0.557\ntestset: URL, BLEU: 8.1, chr-F: 0.441\ntestset: URL, BLEU: 8.9, chr-F: 0.439\ntestset: URL, BLEU: 8.8, chr-F: 0.288\ntestset: URL, BLEU: 26.1, chr-F: 0.414\ntestset: URL, BLEU: 25.5, chr-F: 0.440\ntestset: URL, BLEU: 30.1, chr-F: 0.449\ntestset: URL, BLEU: 12.6, chr-F: 0.412\ntestset: URL, BLEU: 9.9, chr-F: 0.416\ntestset: URL, BLEU: 8.4, chr-F: 0.289\ntestset: URL, BLEU: 21.2, chr-F: 0.395\ntestset: URL, BLEU: 25.9, chr-F: 0.384\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 10.4, chr-F: 0.376\ntestset: URL, BLEU: 18.1, chr-F: 0.373\ntestset: URL, BLEU: 24.4, chr-F: 0.467\ntestset: URL, BLEU: 42.9, chr-F: 0.583\ntestset: URL, BLEU: 19.5, chr-F: 0.444\ntestset: URL, BLEU: 11.6, chr-F: 0.323\ntestset: URL, BLEU: 22.1, chr-F: 0.398\ntestset: URL, BLEU: 32.1, chr-F: 0.386\ntestset: URL, BLEU: 21.9, chr-F: 0.407\ntestset: URL, BLEU: 29.3, chr-F: 0.476\ntestset: URL, BLEU: 40.5, chr-F: 0.708\ntestset: URL, BLEU: 
0.0, chr-F: 0.034\ntestset: URL, BLEU: 38.1, chr-F: 0.582\ntestset: URL, BLEU: 31.8, chr-F: 0.511\ntestset: URL, BLEU: 29.8, chr-F: 0.483\ntestset: URL, BLEU: 39.8, chr-F: 0.336\ntestset: URL, BLEU: 26.3, chr-F: 0.441\ntestset: URL, BLEU: 27.3, chr-F: 0.469\ntestset: URL, BLEU: 1.9, chr-F: 0.047\ntestset: URL, BLEU: 28.9, chr-F: 0.501\ntestset: URL, BLEU: 2.6, chr-F: 0.135\ntestset: URL, BLEU: 59.6, chr-F: 0.740\ntestset: URL, BLEU: 0.1, chr-F: 0.012\ntestset: URL, BLEU: 40.2, chr-F: 0.566\ntestset: URL, BLEU: 19.7, chr-F: 0.358\ntestset: URL, BLEU: 17.4, chr-F: 0.465\ntestset: URL, BLEU: 18.0, chr-F: 0.386\ntestset: URL, BLEU: 30.7, chr-F: 0.496\ntestset: URL, BLEU: 10.7, chr-F: 0.133\ntestset: URL, BLEU: 38.1, chr-F: 0.539\ntestset: URL, BLEU: 53.2, chr-F: 0.676\ntestset: URL, BLEU: 3.8, chr-F: 0.125\ntestset: URL, BLEU: 3.4, chr-F: 0.252\ntestset: URL, BLEU: 24.2, chr-F: 0.460\ntestset: URL, BLEU: 12.1, chr-F: 0.427\ntestset: URL, BLEU: 4.7, chr-F: 0.287\ntestset: URL, BLEU: 27.8, chr-F: 0.482\ntestset: URL, BLEU: 40.6, chr-F: 0.608\ntestset: URL, BLEU: 23.1, chr-F: 0.450\ntestset: URL, BLEU: 0.8, chr-F: 0.060\ntestset: URL, BLEU: 10.1, chr-F: 0.375\ntestset: URL, BLEU: 38.9, chr-F: 0.577\ntestset: URL, BLEU: 31.7, chr-F: 0.539\ntestset: URL, BLEU: 0.2, chr-F: 0.061\ntestset: URL, BLEU: 31.5, chr-F: 0.539\ntestset: URL, BLEU: 47.4, chr-F: 0.633\ntestset: URL, BLEU: 6.4, chr-F: 0.247\ntestset: URL, BLEU: 4.2, chr-F: 0.236\ntestset: URL, BLEU: 46.6, chr-F: 0.642\ntestset: URL, BLEU: 20.0, chr-F: 0.409\ntestset: URL, BLEU: 7.8, chr-F: 0.312\ntestset: URL, BLEU: 36.3, chr-F: 0.577\ntestset: URL, BLEU: 1.1, chr-F: 0.030\ntestset: URL, BLEU: 39.4, chr-F: 0.595\ntestset: URL, BLEU: 18.5, chr-F: 0.408\ntestset: URL, BLEU: 1.9, chr-F: 0.160\ntestset: URL, BLEU: 1.0, chr-F: 0.178\ntestset: URL, BLEU: 7.1, chr-F: 0.320\ntestset: URL, BLEU: 29.0, chr-F: 0.511\ntestset: URL, BLEU: 0.2, chr-F: 0.107\ntestset: URL, BLEU: 20.7, chr-F: 0.475\ntestset: URL, BLEU: 20.6, chr-F: 0.373\ntestset: URL, BLEU: 14.3, chr-F: 0.409\ntestset: URL, BLEU: 13.3, chr-F: 0.378\ntestset: URL, BLEU: 37.8, chr-F: 0.578\ntestset: URL, BLEU: 35.7, chr-F: 0.578\ntestset: URL, BLEU: 11.0, chr-F: 0.369\ntestset: URL, BLEU: 1.2, chr-F: 0.010\ntestset: URL, BLEU: 0.2, chr-F: 0.110\ntestset: URL, BLEU: 25.9, chr-F: 0.507\ntestset: URL, BLEU: 36.8, chr-F: 0.597\ntestset: URL, BLEU: 34.3, chr-F: 0.574\ntestset: URL, BLEU: 28.5, chr-F: 0.494\ntestset: URL, BLEU: 11.7, chr-F: 0.364\ntestset: URL, BLEU: 46.3, chr-F: 0.653\ntestset: URL, BLEU: 21.9, chr-F: 0.418\ntestset: URL, BLEU: 37.7, chr-F: 0.562\ntestset: URL, BLEU: 33.1, chr-F: 0.538\ntestset: URL, BLEU: 0.8, chr-F: 0.095\ntestset: URL, BLEU: 10.3, chr-F: 0.280\ntestset: URL, BLEU: 3.9, chr-F: 0.098\ntestset: URL, BLEU: 5.0, chr-F: 0.217\ntestset: URL, BLEU: 12.2, chr-F: 0.357\ntestset: URL, BLEU: 4.1, chr-F: 0.237\ntestset: URL, BLEU: 5.3, chr-F: 0.299\ntestset: URL, BLEU: 15.3, chr-F: 0.322\ntestset: URL, BLEU: 0.0, chr-F: 0.095\ntestset: URL, BLEU: 11.3, chr-F: 0.272\ntestset: URL, BLEU: 0.0, chr-F: 0.069\ntestset: URL, BLEU: 35.4, chr-F: 0.540\ntestset: URL, BLEU: 24.3, chr-F: 0.509\ntestset: URL, BLEU: 12.0, chr-F: 0.226\ntestset: URL, BLEU: 10.0, chr-F: 0.205\ntestset: URL, BLEU: 5.5, chr-F: 0.048\ntestset: URL, BLEU: 16.5, chr-F: 0.236\ntestset: URL, BLEU: 7.6, chr-F: 0.081\ntestset: URL, BLEU: 1.6, chr-F: 0.013\ntestset: URL, BLEU: 11.4, chr-F: 0.362\ntestset: URL, BLEU: 0.2, chr-F: 0.067\ntestset: URL, BLEU: 6.1, chr-F: 0.240\ntestset: URL, BLEU: 1.9, chr-F: 
0.161\ntestset: URL, BLEU: 3.3, chr-F: 0.155\ntestset: URL, BLEU: 31.9, chr-F: 0.184\ntestset: URL, BLEU: 5.0, chr-F: 0.230\ntestset: URL, BLEU: 37.0, chr-F: 0.295\ntestset: URL, BLEU: 1.3, chr-F: 0.184\ntestset: URL, BLEU: 39.1, chr-F: 0.426\ntestset: URL, BLEU: 4.3, chr-F: 0.206\ntestset: URL, BLEU: 2.1, chr-F: 0.164\ntestset: URL, BLEU: 1.4, chr-F: 0.046\ntestset: URL, BLEU: 9.7, chr-F: 0.330\ntestset: URL, BLEU: 35.4, chr-F: 0.529\ntestset: URL, BLEU: 33.1, chr-F: 0.604\ntestset: URL, BLEU: 15.4, chr-F: 0.325\ntestset: URL, BLEU: 19.3, chr-F: 0.405\ntestset: URL, BLEU: 23.1, chr-F: 0.421\ntestset: URL, BLEU: 2.2, chr-F: 0.173\ntestset: URL, BLEU: 5.2, chr-F: 0.194\ntestset: URL, BLEU: 26.3, chr-F: 0.405\ntestset: URL, BLEU: 0.0, chr-F: 0.170\ntestset: URL, BLEU: 21.4, chr-F: 0.347\ntestset: URL, BLEU: 1.2, chr-F: 0.058\ntestset: URL, BLEU: 22.7, chr-F: 0.479\ntestset: URL, BLEU: 2.4, chr-F: 0.190\ntestset: URL, BLEU: 3.4, chr-F: 0.239\ntestset: URL, BLEU: 45.5, chr-F: 0.580\ntestset: URL, BLEU: 23.0, chr-F: 0.690\ntestset: URL, BLEU: 33.5, chr-F: 0.449\ntestset: URL, BLEU: 66.9, chr-F: 0.951\ntestset: URL, BLEU: 0.0, chr-F: 0.076\ntestset: URL, BLEU: 27.5, chr-F: 0.448\ntestset: URL, BLEU: 78.3, chr-F: 0.693\ntestset: URL, BLEU: 6.5, chr-F: 0.308\ntestset: URL, BLEU: 0.0, chr-F: 0.179\ntestset: URL, BLEU: 59.5, chr-F: 0.602\ntestset: URL, BLEU: 37.0, chr-F: 0.553\ntestset: URL, BLEU: 66.9, chr-F: 0.783\ntestset: URL, BLEU: 8.1, chr-F: 0.282\ntestset: URL, BLEU: 4.8, chr-F: 0.212\ntestset: URL, BLEU: 5.0, chr-F: 0.237\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 0.9, chr-F: 0.068\ntestset: URL, BLEU: 10.6, chr-F: 0.284\ntestset: URL, BLEU: 27.5, chr-F: 0.481\ntestset: URL, BLEU: 15.6, chr-F: 0.331\ntestset: URL, BLEU: 2.9, chr-F: 0.203\ntestset: URL, BLEU: 29.4, chr-F: 0.479\ntestset: URL, BLEU: 19.9, chr-F: 0.391\ntestset: URL, BLEU: 20.5, chr-F: 0.396\ntestset: URL, BLEU: 1.0, chr-F: 0.082\ntestset: URL, BLEU: 7.9, chr-F: 0.407\ntestset: URL, BLEU: 9.3, chr-F: 0.286\ntestset: URL, BLEU: 7.1, chr-F: 0.192\ntestset: URL, BLEU: 3.6, chr-F: 0.150\ntestset: URL, BLEU: 0.2, chr-F: 0.001\ntestset: URL, BLEU: 15.1, chr-F: 0.322\ntestset: URL, BLEU: 8.3, chr-F: 0.108\ntestset: URL, BLEU: 20.7, chr-F: 0.415\ntestset: URL, BLEU: 7.9, chr-F: 0.260\ntestset: URL, BLEU: 0.2, chr-F: 0.087\ntestset: URL, BLEU: 5.6, chr-F: 0.301\ntestset: URL, BLEU: 10.2, chr-F: 0.352\ntestset: URL, BLEU: 24.3, chr-F: 0.444\ntestset: URL, BLEU: 14.5, chr-F: 0.338\ntestset: URL, BLEU: 0.1, chr-F: 0.006\ntestset: URL, BLEU: 21.8, chr-F: 0.412\ntestset: URL, BLEU: 12.2, chr-F: 0.336\ntestset: URL, BLEU: 12.7, chr-F: 0.343\ntestset: URL, BLEU: 16.6, chr-F: 0.362\ntestset: URL, BLEU: 3.2, chr-F: 0.215\ntestset: URL, BLEU: 18.9, chr-F: 0.414\ntestset: URL, BLEU: 53.4, chr-F: 0.708\ntestset: URL, BLEU: 14.0, chr-F: 0.343\ntestset: URL, BLEU: 2.1, chr-F: 0.182\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 34.5, chr-F: 0.540\ntestset: URL, BLEU: 33.6, chr-F: 0.520\ntestset: URL, BLEU: 40.5, chr-F: 0.598\ntestset: URL, BLEU: 72.7, chr-F: 0.770\ntestset: URL, BLEU: 30.5, chr-F: 0.570\ntestset: URL, BLEU: 5.7, chr-F: 0.362\ntestset: URL, BLEU: 23.5, chr-F: 0.504\ntestset: URL, BLEU: 13.7, chr-F: 0.550\ntestset: URL, BLEU: 37.6, chr-F: 0.551\ntestset: URL, BLEU: 32.5, chr-F: 0.517\ntestset: URL, BLEU: 8.6, chr-F: 0.483\ntestset: URL, BLEU: 26.6, chr-F: 0.511\ntestset: URL, BLEU: 95.1, chr-F: 0.958\ntestset: URL, BLEU: 9.0, chr-F: 0.488\ntestset: URL, BLEU: 6.8, chr-F: 0.251\ntestset: URL, BLEU: 
12.2, chr-F: 0.329\ntestset: URL, BLEU: 10.4, chr-F: 0.366\ntestset: URL, BLEU: 25.7, chr-F: 0.472\ntestset: URL, BLEU: 37.5, chr-F: 0.551\ntestset: URL, BLEU: 32.1, chr-F: 0.489\ntestset: URL, BLEU: 22.3, chr-F: 0.460\ntestset: URL, BLEU: 7.4, chr-F: 0.195\ntestset: URL, BLEU: 22.6, chr-F: 0.378\ntestset: URL, BLEU: 9.7, chr-F: 0.282\ntestset: URL, BLEU: 7.2, chr-F: 0.374\ntestset: URL, BLEU: 30.9, chr-F: 0.529\ntestset: URL, BLEU: 25.0, chr-F: 0.439\ntestset: URL, BLEU: 30.6, chr-F: 0.504\ntestset: URL, BLEU: 8.6, chr-F: 0.331\ntestset: URL, BLEU: 32.9, chr-F: 0.516\ntestset: URL, BLEU: 19.6, chr-F: 0.371\ntestset: URL, BLEU: 6.5, chr-F: 0.360\ntestset: URL, BLEU: 13.7, chr-F: 0.310\ntestset: URL, BLEU: 13.1, chr-F: 0.368\ntestset: URL, BLEU: 3.4, chr-F: 0.064\ntestset: URL, BLEU: 9.3, chr-F: 0.351\ntestset: URL, BLEU: 22.3, chr-F: 0.323\ntestset: URL, BLEU: 10.9, chr-F: 0.333\ntestset: URL, BLEU: 49.5, chr-F: 0.589\ntestset: URL, BLEU: 0.0, chr-F: 0.051\ntestset: URL, BLEU: 9.7, chr-F: 0.353\ntestset: URL, BLEU: 65.1, chr-F: 0.463\ntestset: URL, BLEU: 35.6, chr-F: 0.533\ntestset: URL, BLEU: 33.7, chr-F: 0.448\ntestset: URL, BLEU: 24.3, chr-F: 0.451\ntestset: URL, BLEU: 23.4, chr-F: 0.621\ntestset: URL, BLEU: 0.5, chr-F: 0.104\ntestset: URL, BLEU: 14.2, chr-F: 0.412\ntestset: URL, BLEU: 7.8, chr-F: 0.179\ntestset: URL, BLEU: 7.6, chr-F: 0.106\ntestset: URL, BLEU: 32.4, chr-F: 0.488\ntestset: URL, BLEU: 27.8, chr-F: 0.599\ntestset: URL, BLEU: 12.7, chr-F: 0.319\ntestset: URL, BLEU: 18.0, chr-F: 0.392\ntestset: URL, BLEU: 15.6, chr-F: 0.458\ntestset: URL, BLEU: 0.6, chr-F: 0.065\ntestset: URL, BLEU: 32.5, chr-F: 0.403\ntestset: URL, BLEU: 1.4, chr-F: 0.236\ntestset: URL, BLEU: 49.8, chr-F: 0.429\ntestset: URL, BLEU: 18.6, chr-F: 0.460\ntestset: URL, BLEU: 5.1, chr-F: 0.230\ntestset: URL, BLEU: 14.2, chr-F: 0.379\ntestset: URL, BLEU: 20.0, chr-F: 0.422\ntestset: URL, BLEU: 40.7, chr-F: 0.470\ntestset: URL, BLEU: 7.3, chr-F: 0.407\ntestset: URL, BLEU: 35.4, chr-F: 0.638\ntestset: URL, BLEU: 49.0, chr-F: 0.615\ntestset: URL, BLEU: 42.7, chr-F: 0.655\ntestset: URL, BLEU: 9.7, chr-F: 0.362\ntestset: URL, BLEU: 61.6, chr-F: 0.819\ntestset: URL, BLEU: 15.0, chr-F: 0.506\ntestset: URL, BLEU: 31.0, chr-F: 0.548\ntestset: URL, BLEU: 35.8, chr-F: 0.524\ntestset: URL, BLEU: 30.2, chr-F: 0.486\ntestset: URL, BLEU: 32.5, chr-F: 0.589\ntestset: URL, BLEU: 16.6, chr-F: 0.557\ntestset: URL, BLEU: 11.6, chr-F: 0.395\ntestset: URL, BLEU: 42.7, chr-F: 0.680\ntestset: URL, BLEU: 53.7, chr-F: 0.833\ntestset: URL, BLEU: 10.1, chr-F: 0.492\ntestset: URL, BLEU: 9.7, chr-F: 0.196\ntestset: URL, BLEU: 24.7, chr-F: 0.727\ntestset: URL, BLEU: 43.2, chr-F: 0.601\ntestset: URL, BLEU: 23.6, chr-F: 0.361\ntestset: URL, BLEU: 42.7, chr-F: 0.864\ntestset: URL, BLEU: 3.4, chr-F: 0.323\ntestset: URL, BLEU: 17.1, chr-F: 0.418\ntestset: URL, BLEU: 1.8, chr-F: 0.199\ntestset: URL, BLEU: 11.9, chr-F: 0.258\ntestset: URL, BLEU: 3.4, chr-F: 0.115\ntestset: URL, BLEU: 0.0, chr-F: 0.000\ntestset: URL, BLEU: 23.5, chr-F: 0.470\ntestset: URL, BLEU: 19.7, chr-F: 0.490\ntestset: URL, BLEU: 27.8, chr-F: 0.472\ntestset: URL, BLEU: 2.0, chr-F: 0.232\ntestset: URL, BLEU: 5.9, chr-F: 0.241\ntestset: URL, BLEU: 25.9, chr-F: 0.465\ntestset: URL, BLEU: 1.7, chr-F: 0.195\ntestset: URL, BLEU: 3.4, chr-F: 0.228\ntestset: URL, BLEU: 23.4, chr-F: 0.481\ntestset: URL, BLEU: 11.5, chr-F: 0.304\ntestset: URL, BLEU: 5.8, chr-F: 0.243\ntestset: URL, BLEU: 20.9, chr-F: 0.442\ntestset: URL, BLEU: 14.8, chr-F: 0.431\ntestset: URL, BLEU: 83.8, chr-F: 
0.946\ntestset: URL, BLEU: 9.1, chr-F: 0.349\ntestset: URL, BLEU: 15.4, chr-F: 0.385\ntestset: URL, BLEU: 3.4, chr-F: 0.195\ntestset: URL, BLEU: 18.8, chr-F: 0.401\ntestset: URL, BLEU: 0.0, chr-F: 0.056\ntestset: URL, BLEU: 22.6, chr-F: 0.451\ntestset: URL, BLEU: 5.7, chr-F: 0.267\ntestset: URL, BLEU: 8.0, chr-F: 0.102\ntestset: URL, BLEU: 30.8, chr-F: 0.509\ntestset: URL, BLEU: 22.8, chr-F: 0.416\ntestset: URL, BLEU: 7.0, chr-F: 0.321\ntestset: URL, BLEU: 35.4, chr-F: 0.561\ntestset: URL, BLEU: 42.7, chr-F: 0.835\ntestset: URL, BLEU: 38.3, chr-F: 0.491\ntestset: URL, BLEU: 18.5, chr-F: 0.399\ntestset: URL, BLEU: 32.6, chr-F: 0.552\ntestset: URL, BLEU: 18.1, chr-F: 0.426\ntestset: URL, BLEU: 28.9, chr-F: 0.480\ntestset: URL, BLEU: 6.9, chr-F: 0.198\ntestset: URL, BLEU: 6.6, chr-F: 0.187\ntestset: URL, BLEU: 31.9, chr-F: 0.498\ntestset: URL, BLEU: 0.5, chr-F: 0.000\ntestset: URL, BLEU: 0.0, chr-F: 0.023\ntestset: URL, BLEU: 1.2, chr-F: 0.148\ntestset: URL, BLEU: 28.5, chr-F: 0.505\ntestset: URL, BLEU: 7.8, chr-F: 0.164\ntestset: URL, BLEU: 38.2, chr-F: 0.584\ntestset: URL, BLEU: 42.8, chr-F: 0.612\ntestset: URL, BLEU: 15.3, chr-F: 0.405\ntestset: URL, BLEU: 26.0, chr-F: 0.447\ntestset: URL, BLEU: 0.0, chr-F: 0.353\ntestset: URL, BLEU: 24.3, chr-F: 0.440\ntestset: URL, BLEU: 31.7, chr-F: 0.527\ntestset: URL, BLEU: 0.1, chr-F: 0.080\ntestset: URL, BLEU: 20.1, chr-F: 0.464\ntestset: URL, BLEU: 42.8, chr-F: 0.365\ntestset: URL, BLEU: 2.1, chr-F: 0.161\ntestset: URL, BLEU: 50.1, chr-F: 0.670\ntestset: URL, BLEU: 42.7, chr-F: 0.835\ntestset: URL, BLEU: 17.5, chr-F: 0.410\ntestset: URL, BLEU: 3.2, chr-F: 0.189\ntestset: URL, BLEU: 28.7, chr-F: 0.468\ntestset: URL, BLEU: 31.9, chr-F: 0.546\ntestset: URL, BLEU: 24.4, chr-F: 0.504\ntestset: URL, BLEU: 0.6, chr-F: 0.048\ntestset: URL, BLEU: 49.1, chr-F: 0.660\ntestset: URL, BLEU: 38.3, chr-F: 0.589\ntestset: URL, BLEU: 0.2, chr-F: 0.084\ntestset: URL, BLEU: 35.3, chr-F: 0.528\ntestset: URL, BLEU: 42.4, chr-F: 0.602\ntestset: URL, BLEU: 6.1, chr-F: 0.269\ntestset: URL, BLEU: 18.6, chr-F: 0.459\ntestset: URL, BLEU: 35.7, chr-F: 0.549\ntestset: URL, BLEU: 2.8, chr-F: 0.099\ntestset: URL, BLEU: 19.2, chr-F: 0.438\ntestset: URL, BLEU: 35.0, chr-F: 0.576\ntestset: URL, BLEU: 0.5, chr-F: 0.129\ntestset: URL, BLEU: 26.8, chr-F: 0.418\ntestset: URL, BLEU: 35.3, chr-F: 0.580\ntestset: URL, BLEU: 4.2, chr-F: 0.147\ntestset: URL, BLEU: 0.7, chr-F: 0.101\ntestset: URL, BLEU: 6.7, chr-F: 0.314\ntestset: URL, BLEU: 17.6, chr-F: 0.384\ntestset: URL, BLEU: 0.0, chr-F: 0.238\ntestset: URL, BLEU: 3.6, chr-F: 0.210\ntestset: URL, BLEU: 15.9, chr-F: 0.405\ntestset: URL, BLEU: 42.4, chr-F: 0.618\ntestset: URL, BLEU: 9.0, chr-F: 0.306\ntestset: URL, BLEU: 38.9, chr-F: 0.531\ntestset: URL, BLEU: 25.8, chr-F: 0.498\ntestset: URL, BLEU: 31.7, chr-F: 0.535\ntestset: URL, BLEU: 26.6, chr-F: 0.495\ntestset: URL, BLEU: 30.0, chr-F: 0.512\ntestset: URL, BLEU: 4.3, chr-F: 0.299\ntestset: URL, BLEU: 35.0, chr-F: 0.560\ntestset: URL, BLEU: 1.6, chr-F: 0.201\ntestset: URL, BLEU: 72.2, chr-F: 0.801\ntestset: URL, BLEU: 5.0, chr-F: 0.129\ntestset: URL, BLEU: 26.2, chr-F: 0.481\ntestset: URL, BLEU: 3.5, chr-F: 0.133\ntestset: URL, BLEU: 11.5, chr-F: 0.293\ntestset: URL, BLEU: 30.3, chr-F: 0.471\ntestset: URL, BLEU: 90.1, chr-F: 0.839\ntestset: URL, BLEU: 50.0, chr-F: 0.638\ntestset: URL, BLEU: 42.2, chr-F: 0.467\ntestset: URL, BLEU: 3.2, chr-F: 0.188\ntestset: URL, BLEU: 35.4, chr-F: 0.529\ntestset: URL, BLEU: 38.0, chr-F: 0.627\ntestset: URL, BLEU: 3.2, chr-F: 0.072\ntestset: URL, BLEU: 
14.7, chr-F: 0.465\ntestset: URL, BLEU: 59.0, chr-F: 0.757\ntestset: URL, BLEU: 32.4, chr-F: 0.560\ntestset: URL, BLEU: 29.9, chr-F: 0.507\ntestset: URL, BLEU: 40.8, chr-F: 0.585\ntestset: URL, BLEU: 4.2, chr-F: 0.303\ntestset: URL, BLEU: 10.0, chr-F: 0.345\ntestset: URL, BLEU: 38.4, chr-F: 0.572\ntestset: URL, BLEU: 18.7, chr-F: 0.375\ntestset: URL, BLEU: 10.7, chr-F: 0.015\ntestset: URL, BLEU: 21.7, chr-F: 0.465\ntestset: URL, BLEU: 14.8, chr-F: 0.307\ntestset: URL, BLEU: 23.2, chr-F: 0.445\ntestset: URL, BLEU: 35.2, chr-F: 0.594\ntestset: URL, BLEU: 10.7, chr-F: 0.037\ntestset: URL, BLEU: 6.6, chr-F: 0.370\ntestset: URL, BLEU: 3.6, chr-F: 0.261\ntestset: URL, BLEU: 12.2, chr-F: 0.404\ntestset: URL, BLEU: 8.0, chr-F: 0.442\ntestset: URL, BLEU: 20.3, chr-F: 0.466\ntestset: URL, BLEU: 39.1, chr-F: 0.598\ntestset: URL, BLEU: 49.0, chr-F: 0.698\ntestset: URL, BLEU: 26.3, chr-F: 0.515\ntestset: URL, BLEU: 31.0, chr-F: 0.543\ntestset: URL, BLEU: 28.0, chr-F: 0.475\ntestset: URL, BLEU: 28.1, chr-F: 0.513\ntestset: URL, BLEU: 1.2, chr-F: 0.193\ntestset: URL, BLEU: 38.2, chr-F: 0.598\ntestset: URL, BLEU: 58.8, chr-F: 0.741\ntestset: URL, BLEU: 29.1, chr-F: 0.515\ntestset: URL, BLEU: 42.6, chr-F: 0.473\ntestset: URL, BLEU: 11.2, chr-F: 0.346\ntestset: URL, BLEU: 13.4, chr-F: 0.331\ntestset: URL, BLEU: 5.3, chr-F: 0.206\ntestset: URL, BLEU: 19.6, chr-F: 0.423\ntestset: URL, BLEU: 24.5, chr-F: 0.493\ntestset: URL, BLEU: 22.5, chr-F: 0.408\ntestset: URL, BLEU: 8.8, chr-F: 0.322\ntestset: URL, BLEU: 16.4, chr-F: 0.387\ntestset: URL, BLEU: 20.4, chr-F: 0.442\ntestset: URL, BLEU: 66.9, chr-F: 0.968\ntestset: URL, BLEU: 3.9, chr-F: 0.168\ntestset: URL, BLEU: 9.1, chr-F: 0.175\ntestset: URL, BLEU: 5.8, chr-F: 0.256\ntestset: URL, BLEU: 8.4, chr-F: 0.243\ntestset: URL, BLEU: 8.9, chr-F: 0.244\ntestset: URL, BLEU: 8.1, chr-F: 0.297\ntestset: URL, BLEU: 1.2, chr-F: 0.207\ntestset: URL, BLEU: 11.6, chr-F: 0.338\ntestset: URL, BLEU: 8.2, chr-F: 0.234\ntestset: URL, BLEU: 7.8, chr-F: 0.331\ntestset: URL, BLEU: 6.4, chr-F: 0.217\ntestset: URL, BLEU: 5.8, chr-F: 0.230\ntestset: URL, BLEU: 10.8, chr-F: 0.279\ntestset: URL, BLEU: 6.0, chr-F: 0.225\ntestset: URL, BLEU: 6.1, chr-F: 0.256\ntestset: URL, BLEU: 0.0, chr-F: 0.626\ntestset: URL, BLEU: 45.7, chr-F: 0.586\ntestset: URL, BLEU: 43.9, chr-F: 0.589\ntestset: URL, BLEU: 0.0, chr-F: 0.347\ntestset: URL, BLEU: 41.9, chr-F: 0.587\ntestset: URL, BLEU: 14.4, chr-F: 0.365\ntestset: URL, BLEU: 5.8, chr-F: 0.274\ntestset: URL, BLEU: 33.0, chr-F: 0.474\ntestset: URL, BLEU: 36.1, chr-F: 0.479\ntestset: URL, BLEU: 0.7, chr-F: 0.026\ntestset: URL, BLEU: 13.1, chr-F: 0.310\ntestset: URL, BLEU: 8.8, chr-F: 0.296\ntestset: URL, BLEU: 13.0, chr-F: 0.309\ntestset: URL, BLEU: 10.0, chr-F: 0.327\ntestset: URL, BLEU: 15.2, chr-F: 0.304\ntestset: URL, BLEU: 10.4, chr-F: 0.352\ntestset: URL, BLEU: 40.2, chr-F: 0.589\ntestset: URL, BLEU: 24.8, chr-F: 0.503\ntestset: URL, BLEU: 29.4, chr-F: 0.508\ntestset: URL, BLEU: 20.3, chr-F: 0.416\ntestset: URL, BLEU: 28.0, chr-F: 0.489\ntestset: URL, BLEU: 1.3, chr-F: 0.052\ntestset: URL, BLEU: 7.0, chr-F: 0.347\ntestset: URL, BLEU: 37.0, chr-F: 0.551\ntestset: URL, BLEU: 29.1, chr-F: 0.508\ntestset: URL, BLEU: 0.8, chr-F: 0.070\ntestset: URL, BLEU: 32.3, chr-F: 0.519\ntestset: URL, BLEU: 34.1, chr-F: 0.531\ntestset: URL, BLEU: 1.2, chr-F: 0.234\ntestset: URL, BLEU: 6.5, chr-F: 0.208\ntestset: URL, BLEU: 30.8, chr-F: 0.510\ntestset: URL, BLEU: 7.2, chr-F: 0.287\ntestset: URL, BLEU: 14.6, chr-F: 0.301\ntestset: URL, BLEU: 18.4, chr-F: 
0.498\ntestset: URL, BLEU: 31.8, chr-F: 0.546\ntestset: URL, BLEU: 3.5, chr-F: 0.193\ntestset: URL, BLEU: 11.4, chr-F: 0.336\ntestset: URL, BLEU: 28.5, chr-F: 0.522\ntestset: URL, BLEU: 2.6, chr-F: 0.134\ntestset: URL, BLEU: 16.0, chr-F: 0.265\ntestset: URL, BLEU: 7.2, chr-F: 0.311\ntestset: URL, BLEU: 22.9, chr-F: 0.450\ntestset: URL, BLEU: 21.2, chr-F: 0.493\ntestset: URL, BLEU: 38.0, chr-F: 0.718\ntestset: URL, BLEU: 2.2, chr-F: 0.173\ntestset: URL, BLEU: 14.4, chr-F: 0.370\ntestset: URL, BLEU: 30.6, chr-F: 0.501\ntestset: URL, BLEU: 33.3, chr-F: 0.536\ntestset: URL, BLEU: 4.0, chr-F: 0.282\ntestset: URL, BLEU: 0.4, chr-F: 0.005\ntestset: URL, BLEU: 1.3, chr-F: 0.032\ntestset: URL, BLEU: 25.9, chr-F: 0.491\ntestset: URL, BLEU: 0.0, chr-F: 0.083\ntestset: URL, BLEU: 26.5, chr-F: 0.487\ntestset: URL, BLEU: 34.7, chr-F: 0.550\ntestset: URL, BLEU: 7.4, chr-F: 0.256\ntestset: URL, BLEU: 30.7, chr-F: 0.516\ntestset: URL, BLEU: 35.0, chr-F: 0.530\ntestset: URL, BLEU: 32.8, chr-F: 0.538\ntestset: URL, BLEU: 5.6, chr-F: 0.381\ntestset: URL, BLEU: 4.8, chr-F: 0.146\ntestset: URL, BLEU: 48.1, chr-F: 0.653\ntestset: URL, BLEU: 8.4, chr-F: 0.213\ntestset: URL, BLEU: 42.7, chr-F: 0.835\ntestset: URL, BLEU: 9.7, chr-F: 0.539\ntestset: URL, BLEU: 41.5, chr-F: 0.569\ntestset: URL, BLEU: 36.9, chr-F: 0.612\ntestset: URL, BLEU: 29.0, chr-F: 0.526\ntestset: URL, BLEU: 0.8, chr-F: 0.049\ntestset: URL, BLEU: 51.4, chr-F: 0.668\ntestset: URL, BLEU: 30.8, chr-F: 0.532\ntestset: URL, BLEU: 33.8, chr-F: 0.556\ntestset: URL, BLEU: 44.5, chr-F: 0.622\ntestset: URL, BLEU: 10.7, chr-F: 0.190\ntestset: URL, BLEU: 4.5, chr-F: 0.273\ntestset: URL, BLEU: 43.0, chr-F: 0.625\ntestset: URL, BLEU: 8.9, chr-F: 0.365\ntestset: URL, BLEU: 16.0, chr-F: 0.079\ntestset: URL, BLEU: 12.1, chr-F: 0.315\ntestset: URL, BLEU: 49.2, chr-F: 0.700\ntestset: URL, BLEU: 0.1, chr-F: 0.004\ntestset: URL, BLEU: 39.2, chr-F: 0.575\ntestset: URL, BLEU: 15.5, chr-F: 0.387\ntestset: URL, BLEU: 39.9, chr-F: 0.637\ntestset: URL, BLEU: 3.0, chr-F: 0.133\ntestset: URL, BLEU: 0.6, chr-F: 0.172\ntestset: URL, BLEU: 5.4, chr-F: 0.325\ntestset: URL, BLEU: 18.8, chr-F: 0.418\ntestset: URL, BLEU: 16.8, chr-F: 0.569\ntestset: URL, BLEU: 27.3, chr-F: 0.571\ntestset: URL, BLEU: 7.6, chr-F: 0.327\ntestset: URL, BLEU: 30.5, chr-F: 0.559\ntestset: URL, BLEU: 14.2, chr-F: 0.370\ntestset: URL, BLEU: 35.6, chr-F: 0.558\ntestset: URL, BLEU: 38.0, chr-F: 0.587\ntestset: URL, BLEU: 25.5, chr-F: 0.510\ntestset: URL, BLEU: 5.5, chr-F: 0.058\ntestset: URL, BLEU: 32.0, chr-F: 0.557\ntestset: URL, BLEU: 26.8, chr-F: 0.493\ntestset: URL, BLEU: 48.7, chr-F: 0.686\ntestset: URL, BLEU: 43.4, chr-F: 0.612\ntestset: URL, BLEU: 27.5, chr-F: 0.500\ntestset: URL, BLEU: 9.3, chr-F: 0.293\ntestset: URL, BLEU: 2.2, chr-F: 0.183\ntestset: URL, BLEU: 1.3, chr-F: 0.179\ntestset: URL, BLEU: 2.3, chr-F: 0.183\ntestset: URL, BLEU: 0.5, chr-F: 0.173\ntestset: URL, BLEU: 3.4, chr-F: 0.200\ntestset: URL, BLEU: 1.6, chr-F: 0.166\ntestset: URL, BLEU: 8.3, chr-F: 0.311\ntestset: URL, BLEU: 9.5, chr-F: 0.361\ntestset: URL, BLEU: 8.8, chr-F: 0.415\ntestset: URL, BLEU: 21.4, chr-F: 0.347\ntestset: URL, BLEU: 13.3, chr-F: 0.434\ntestset: URL, BLEU: 2.9, chr-F: 0.204\ntestset: URL, BLEU: 5.3, chr-F: 0.243\ntestset: URL, BLEU: 6.5, chr-F: 0.194\ntestset: URL, BLEU: 30.2, chr-F: 0.667\ntestset: URL, BLEU: 35.4, chr-F: 0.493\ntestset: URL, BLEU: 23.6, chr-F: 0.542\ntestset: URL, BLEU: 10.6, chr-F: 0.344\ntestset: URL, BLEU: 12.7, chr-F: 0.652\ntestset: URL, BLEU: 32.1, chr-F: 0.524\ntestset: URL, BLEU: 
38.4, chr-F: 0.566\ntestset: URL, BLEU: 5.3, chr-F: 0.351\ntestset: URL, BLEU: 7.3, chr-F: 0.338\ntestset: URL, BLEU: 38.0, chr-F: 0.571\ntestset: URL, BLEU: 10.7, chr-F: 0.116\ntestset: URL, BLEU: 36.2, chr-F: 0.587\ntestset: URL, BLEU: 2.4, chr-F: 0.233\ntestset: URL, BLEU: 6.5, chr-F: 0.368\ntestset: URL, BLEU: 27.5, chr-F: 0.484\ntestset: URL, BLEU: 0.8, chr-F: 0.082\ntestset: URL, BLEU: 9.7, chr-F: 0.168\ntestset: URL, BLEU: 32.5, chr-F: 0.522\ntestset: URL, BLEU: 45.2, chr-F: 0.656\ntestset: URL, BLEU: 32.2, chr-F: 0.554\ntestset: URL, BLEU: 33.6, chr-F: 0.577\ntestset: URL, BLEU: 33.3, chr-F: 0.536\ntestset: URL, BLEU: 19.0, chr-F: 0.113\ntestset: URL, BLEU: 40.8, chr-F: 0.605\ntestset: URL, BLEU: 12.7, chr-F: 0.288\ntestset: URL, BLEU: 19.7, chr-F: 0.285\ntestset: URL, BLEU: 18.7, chr-F: 0.359\ntestset: URL, BLEU: 30.1, chr-F: 0.455\ntestset: URL, BLEU: 34.7, chr-F: 0.540\ntestset: URL, BLEU: 0.0, chr-F: 0.042\ntestset: URL, BLEU: 42.7, chr-F: 0.835\ntestset: URL, BLEU: 35.0, chr-F: 0.587\ntestset: URL, BLEU: 30.8, chr-F: 0.534\ntestset: URL, BLEU: 27.9, chr-F: 0.512\ntestset: URL, BLEU: 33.8, chr-F: 0.537\ntestset: URL, BLEU: 0.4, chr-F: 0.038\ntestset: URL, BLEU: 7.6, chr-F: 0.384\ntestset: URL, BLEU: 37.9, chr-F: 0.559\ntestset: URL, BLEU: 31.3, chr-F: 0.528\ntestset: URL, BLEU: 16.0, chr-F: 0.060\ntestset: URL, BLEU: 29.0, chr-F: 0.512\ntestset: URL, BLEU: 37.6, chr-F: 0.553\ntestset: URL, BLEU: 1.6, chr-F: 0.138\ntestset: URL, BLEU: 4.2, chr-F: 0.278\ntestset: URL, BLEU: 33.0, chr-F: 0.524\ntestset: URL, BLEU: 16.3, chr-F: 0.308\ntestset: URL, BLEU: 10.7, chr-F: 0.045\ntestset: URL, BLEU: 22.3, chr-F: 0.427\ntestset: URL, BLEU: 5.9, chr-F: 0.310\ntestset: URL, BLEU: 20.6, chr-F: 0.459\ntestset: URL, BLEU: 1.5, chr-F: 0.152\ntestset: URL, BLEU: 31.0, chr-F: 0.546\ntestset: URL, BLEU: 5.5, chr-F: 0.326\ntestset: URL, BLEU: 12.7, chr-F: 0.365\ntestset: URL, BLEU: 9.0, chr-F: 0.320\ntestset: URL, BLEU: 26.6, chr-F: 0.495\ntestset: URL, BLEU: 5.6, chr-F: 0.210\ntestset: URL, BLEU: 1.0, chr-F: 0.169\ntestset: URL, BLEU: 7.9, chr-F: 0.328\ntestset: URL, BLEU: 31.1, chr-F: 0.519\ntestset: URL, BLEU: 22.0, chr-F: 0.489\ntestset: URL, BLEU: 19.4, chr-F: 0.263\ntestset: URL, BLEU: 19.0, chr-F: 0.217\ntestset: URL, BLEU: 38.5, chr-F: 0.662\ntestset: URL, BLEU: 6.6, chr-F: 0.305\ntestset: URL, BLEU: 11.5, chr-F: 0.350\ntestset: URL, BLEU: 31.1, chr-F: 0.517\ntestset: URL, BLEU: 31.2, chr-F: 0.528\ntestset: URL, BLEU: 4.9, chr-F: 0.261\ntestset: URL, BLEU: 7.3, chr-F: 0.325\ntestset: URL, BLEU: 0.0, chr-F: 0.008\ntestset: URL, BLEU: 4.8, chr-F: 0.198\ntestset: URL, BLEU: 31.3, chr-F: 0.540\ntestset: URL, BLEU: 24.5, chr-F: 0.476\ntestset: URL, BLEU: 25.7, chr-F: 0.492\ntestset: URL, BLEU: 20.7, chr-F: 0.400\ntestset: URL, BLEU: 30.9, chr-F: 0.526\ntestset: URL, BLEU: 32.0, chr-F: 0.507\ntestset: URL, BLEU: 41.1, chr-F: 0.622\ntestset: URL, BLEU: 7.1, chr-F: 0.367\ntestset: URL, BLEU: 4.7, chr-F: 0.253\ntestset: URL, BLEU: 2.5, chr-F: 0.167\ntestset: URL, BLEU: 11.7, chr-F: 0.217\ntestset: URL, BLEU: 3.9, chr-F: 0.224\ntestset: URL, BLEU: 40.7, chr-F: 0.420\ntestset: URL, BLEU: 2.1, chr-F: 0.134\ntestset: URL, BLEU: 3.4, chr-F: 0.244\ntestset: URL, BLEU: 17.2, chr-F: 0.310\ntestset: URL, BLEU: 32.8, chr-F: 0.524\ntestset: URL, BLEU: 5.7, chr-F: 0.254\ntestset: URL, BLEU: 5.3, chr-F: 0.023\ntestset: URL, BLEU: 3.5, chr-F: 0.237\ntestset: URL, BLEU: 11.9, chr-F: 0.335\ntestset: URL, BLEU: 23.7, chr-F: 0.300\ntestset: URL, BLEU: 0.0, chr-F: 0.146\ntestset: URL, BLEU: 14.1, chr-F: 
0.313\ntestset: URL, BLEU: 33.2, chr-F: 0.528\ntestset: URL, BLEU: 33.4, chr-F: 0.518\ntestset: URL, BLEU: 29.9, chr-F: 0.489\ntestset: URL, BLEU: 19.5, chr-F: 0.405\ntestset: URL, BLEU: 28.6, chr-F: 0.499\ntestset: URL, BLEU: 5.5, chr-F: 0.296\ntestset: URL, BLEU: 18.0, chr-F: 0.546\ntestset: URL, BLEU: 18.0, chr-F: 0.452\ntestset: URL, BLEU: 20.3, chr-F: 0.406\ntestset: URL, BLEU: 33.1, chr-F: 0.541\ntestset: URL, BLEU: 12.4, chr-F: 0.348\ntestset: URL, BLEU: 33.4, chr-F: 0.519\ntestset: URL, BLEU: 32.9, chr-F: 0.503\ntestset: URL, BLEU: 14.8, chr-F: 0.095\ntestset: URL, BLEU: 30.1, chr-F: 0.471\ntestset: URL, BLEU: 12.7, chr-F: 0.377\ntestset: URL, BLEU: 46.9, chr-F: 0.624\ntestset: URL, BLEU: 1.1, chr-F: 0.143\ntestset: URL, BLEU: 21.6, chr-F: 0.446\ntestset: URL, BLEU: 28.1, chr-F: 0.526\ntestset: URL, BLEU: 22.8, chr-F: 0.466\ntestset: URL, BLEU: 16.9, chr-F: 0.442\ntestset: URL, BLEU: 30.8, chr-F: 0.510\ntestset: URL, BLEU: 49.1, chr-F: 0.696\ntestset: URL, BLEU: 27.2, chr-F: 0.497\ntestset: URL, BLEU: 0.5, chr-F: 0.049\ntestset: URL, BLEU: 5.3, chr-F: 0.204\ntestset: URL, BLEU: 22.4, chr-F: 0.476\ntestset: URL, BLEU: 39.3, chr-F: 0.581\ntestset: URL, BLEU: 30.9, chr-F: 0.531\ntestset: URL, BLEU: 0.7, chr-F: 0.109\ntestset: URL, BLEU: 0.9, chr-F: 0.060\ntestset: URL, BLEU: 28.9, chr-F: 0.487\ntestset: URL, BLEU: 41.0, chr-F: 0.595\ntestset: URL, BLEU: 13.9, chr-F: 0.188\ntestset: URL, BLEU: 7.9, chr-F: 0.244\ntestset: URL, BLEU: 41.4, chr-F: 0.610\ntestset: URL, BLEU: 15.8, chr-F: 0.397\ntestset: URL, BLEU: 7.0, chr-F: 0.060\ntestset: URL, BLEU: 7.4, chr-F: 0.303\ntestset: URL, BLEU: 22.2, chr-F: 0.415\ntestset: URL, BLEU: 48.8, chr-F: 0.683\ntestset: URL, BLEU: 1.7, chr-F: 0.181\ntestset: URL, BLEU: 0.3, chr-F: 0.010\ntestset: URL, BLEU: 0.1, chr-F: 0.005\ntestset: URL, BLEU: 5.6, chr-F: 0.051\ntestset: URL, BLEU: 15.0, chr-F: 0.365\ntestset: URL, BLEU: 19.9, chr-F: 0.409\ntestset: URL, BLEU: 33.2, chr-F: 0.529\ntestset: URL, BLEU: 16.1, chr-F: 0.331\ntestset: URL, BLEU: 5.1, chr-F: 0.240\ntestset: URL, BLEU: 13.5, chr-F: 0.357\ntestset: URL, BLEU: 18.0, chr-F: 0.410\ntestset: URL, BLEU: 42.7, chr-F: 0.646\ntestset: URL, BLEU: 0.4, chr-F: 0.088\ntestset: URL, BLEU: 5.6, chr-F: 0.237\ntestset: URL, BLEU: 0.9, chr-F: 0.157\ntestset: URL, BLEU: 9.0, chr-F: 0.382\ntestset: URL, BLEU: 23.7, chr-F: 0.510\ntestset: URL, BLEU: 22.4, chr-F: 0.477\ntestset: URL, BLEU: 0.4, chr-F: 0.119\ntestset: URL, BLEU: 34.1, chr-F: 0.531\ntestset: URL, BLEU: 29.4, chr-F: 0.416\ntestset: URL, BLEU: 37.1, chr-F: 0.568\ntestset: URL, BLEU: 14.0, chr-F: 0.405\ntestset: URL, BLEU: 15.4, chr-F: 0.390\ntestset: URL, BLEU: 34.0, chr-F: 0.550\ntestset: URL, BLEU: 41.1, chr-F: 0.608\ntestset: URL, BLEU: 8.0, chr-F: 0.353\ntestset: URL, BLEU: 0.4, chr-F: 0.010\ntestset: URL, BLEU: 0.2, chr-F: 0.060\ntestset: URL, BLEU: 0.6, chr-F: 0.122\ntestset: URL, BLEU: 26.3, chr-F: 0.498\ntestset: URL, BLEU: 41.6, chr-F: 0.638\ntestset: URL, BLEU: 0.3, chr-F: 0.095\ntestset: URL, BLEU: 4.0, chr-F: 0.219\ntestset: URL, BLEU: 31.9, chr-F: 0.550\ntestset: URL, BLEU: 0.2, chr-F: 0.013\ntestset: URL, BLEU: 29.4, chr-F: 0.510\ntestset: URL, BLEU: 1.6, chr-F: 0.086\ntestset: URL, BLEU: 16.0, chr-F: 0.111\ntestset: URL, BLEU: 9.2, chr-F: 0.269\ntestset: URL, BLEU: 8.4, chr-F: 0.375\ntestset: URL, BLEU: 39.5, chr-F: 0.572\ntestset: URL, BLEU: 27.8, chr-F: 0.495\ntestset: URL, BLEU: 2.9, chr-F: 0.220\ntestset: URL, BLEU: 10.0, chr-F: 0.296\ntestset: URL, BLEU: 30.9, chr-F: 0.499\ntestset: URL, BLEU: 29.9, chr-F: 0.545\ntestset: URL, 
BLEU: 24.5, chr-F: 0.484\ntestset: URL, BLEU: 5.8, chr-F: 0.347\ntestset: URL, BLEU: 16.7, chr-F: 0.426\ntestset: URL, BLEU: 8.4, chr-F: 0.370\ntestset: URL, BLEU: 0.6, chr-F: 0.032\ntestset: URL, BLEU: 9.3, chr-F: 0.283\ntestset: URL, BLEU: 0.3, chr-F: 0.126\ntestset: URL, BLEU: 0.0, chr-F: 0.102\ntestset: URL, BLEU: 4.0, chr-F: 0.175\ntestset: URL, BLEU: 13.2, chr-F: 0.398\ntestset: URL, BLEU: 7.0, chr-F: 0.345\ntestset: URL, BLEU: 5.0, chr-F: 0.110\ntestset: URL, BLEU: 63.1, chr-F: 0.831\ntestset: URL, BLEU: 35.4, chr-F: 0.529\ntestset: URL, BLEU: 38.5, chr-F: 0.528\ntestset: URL, BLEU: 32.8, chr-F: 0.380\ntestset: URL, BLEU: 54.5, chr-F: 0.702\ntestset: URL, BLEU: 36.7, chr-F: 0.570\ntestset: URL, BLEU: 32.9, chr-F: 0.541\ntestset: URL, BLEU: 44.9, chr-F: 0.606\ntestset: URL, BLEU: 0.0, chr-F: 0.877\ntestset: URL, BLEU: 43.2, chr-F: 0.605\ntestset: URL, BLEU: 42.7, chr-F: 0.402\ntestset: URL, BLEU: 4.8, chr-F: 0.253\ntestset: URL, BLEU: 39.3, chr-F: 0.591\ntestset: URL, BLEU: 31.6, chr-F: 0.617\ntestset: URL, BLEU: 21.2, chr-F: 0.559\ntestset: URL, BLEU: 33.1, chr-F: 0.548\ntestset: URL, BLEU: 1.4, chr-F: 0.144\ntestset: URL, BLEU: 6.6, chr-F: 0.373\ntestset: URL, BLEU: 4.5, chr-F: 0.453\ntestset: URL, BLEU: 73.4, chr-F: 0.828\ntestset: URL, BLEU: 25.5, chr-F: 0.440\ntestset: URL, BLEU: 0.0, chr-F: 0.124\ntestset: URL, BLEU: 71.9, chr-F: 0.742\ntestset: URL, BLEU: 59.5, chr-F: 0.742\ntestset: URL, BLEU: 25.9, chr-F: 0.497\ntestset: URL, BLEU: 31.3, chr-F: 0.546\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 28.6, chr-F: 0.495\ntestset: URL, BLEU: 19.0, chr-F: 0.116\ntestset: URL, BLEU: 37.1, chr-F: 0.569\ntestset: URL, BLEU: 13.9, chr-F: 0.336\ntestset: URL, BLEU: 16.5, chr-F: 0.438\ntestset: URL, BLEU: 20.1, chr-F: 0.468\ntestset: URL, BLEU: 8.0, chr-F: 0.316\ntestset: URL, BLEU: 13.0, chr-F: 0.300\ntestset: URL, BLEU: 15.3, chr-F: 0.296\ntestset: URL, BLEU: 0.9, chr-F: 0.199\ntestset: URL, BLEU: 4.9, chr-F: 0.287\ntestset: URL, BLEU: 1.9, chr-F: 0.194\ntestset: URL, BLEU: 45.2, chr-F: 0.574\ntestset: URL, BLEU: 7.8, chr-F: 0.271\ntestset: URL, BLEU: 9.6, chr-F: 0.273\ntestset: URL, BLEU: 0.9, chr-F: 0.102\ntestset: URL, BLEU: 4.4, chr-F: 0.054\ntestset: URL, BLEU: 48.3, chr-F: 0.646\ntestset: URL, BLEU: 1.4, chr-F: 0.034\ntestset: URL, BLEU: 36.7, chr-F: 0.601\ntestset: URL, BLEU: 40.4, chr-F: 0.601\ntestset: URL, BLEU: 33.9, chr-F: 0.538\ntestset: URL, BLEU: 33.1, chr-F: 0.524\ntestset: URL, BLEU: 25.8, chr-F: 0.469\ntestset: URL, BLEU: 34.0, chr-F: 0.543\ntestset: URL, BLEU: 23.0, chr-F: 0.493\ntestset: URL, BLEU: 36.1, chr-F: 0.538\ntestset: URL, BLEU: 3.6, chr-F: 0.400\ntestset: URL, BLEU: 5.3, chr-F: 0.240\ntestset: URL, BLEU: 32.0, chr-F: 0.519\ntestset: URL, BLEU: 13.6, chr-F: 0.318\ntestset: URL, BLEU: 3.8, chr-F: 0.199\ntestset: URL, BLEU: 33.4, chr-F: 0.547\ntestset: URL, BLEU: 32.6, chr-F: 0.546\ntestset: URL, BLEU: 1.4, chr-F: 0.166\ntestset: URL, BLEU: 8.0, chr-F: 0.314\ntestset: URL, BLEU: 10.7, chr-F: 0.520\ntestset: URL, BLEU: 59.9, chr-F: 0.631\ntestset: URL, BLEU: 38.0, chr-F: 0.718\ntestset: URL, BLEU: 2.5, chr-F: 0.213\ntestset: URL, BLEU: 11.0, chr-F: 0.368\ntestset: URL, BLEU: 33.0, chr-F: 0.524\ntestset: URL, BLEU: 40.4, chr-F: 0.574\ntestset: URL, BLEU: 0.1, chr-F: 0.008\ntestset: URL, BLEU: 32.7, chr-F: 0.553\ntestset: URL, BLEU: 26.8, chr-F: 0.496\ntestset: URL, BLEU: 45.7, chr-F: 0.651\ntestset: URL, BLEU: 11.8, chr-F: 0.263\ntestset: URL, BLEU: 31.7, chr-F: 0.528\ntestset: URL, BLEU: 3.6, chr-F: 0.196\ntestset: URL, BLEU: 36.7, chr-F: 
0.586\ntestset: URL, BLEU: 17.1, chr-F: 0.451\ntestset: URL, BLEU: 17.1, chr-F: 0.375\ntestset: URL, BLEU: 38.1, chr-F: 0.565\ntestset: URL, BLEU: 0.0, chr-F: 1.000\ntestset: URL, BLEU: 14.0, chr-F: 0.404\ntestset: URL, BLEU: 1.5, chr-F: 0.014\ntestset: URL, BLEU: 68.7, chr-F: 0.695\ntestset: URL, BLEU: 25.8, chr-F: 0.314\ntestset: URL, BLEU: 13.6, chr-F: 0.319\ntestset: URL, BLEU: 48.3, chr-F: 0.680\ntestset: URL, BLEU: 28.3, chr-F: 0.454\ntestset: URL, BLEU: 4.4, chr-F: 0.206\ntestset: URL, BLEU: 8.0, chr-F: 0.282\ntestset: URL, BLEU: 5.2, chr-F: 0.237\ntestset: URL, BLEU: 9.9, chr-F: 0.395\ntestset: URL, BLEU: 35.4, chr-F: 0.868\ntestset: URL, BLEU: 0.8, chr-F: 0.077\ntestset: URL, BLEU: 4.9, chr-F: 0.240\ntestset: URL, BLEU: 11.3, chr-F: 0.054\ntestset: URL, BLEU: 19.0, chr-F: 0.583\ntestset: URL, BLEU: 5.4, chr-F: 0.320\ntestset: URL, BLEU: 6.3, chr-F: 0.239\ntestset: URL, BLEU: 12.8, chr-F: 0.341\ntestset: URL, BLEU: 17.5, chr-F: 0.382\ntestset: URL, BLEU: 42.7, chr-F: 0.797\ntestset: URL, BLEU: 15.5, chr-F: 0.338\ntestset: URL, BLEU: 2.3, chr-F: 0.176\ntestset: URL, BLEU: 4.5, chr-F: 0.207\ntestset: URL, BLEU: 18.9, chr-F: 0.367\ntestset: URL, BLEU: 6.0, chr-F: 0.156\ntestset: URL, BLEU: 32.2, chr-F: 0.448\ntestset: URL, BLEU: 1.3, chr-F: 0.142\ntestset: URL, BLEU: 15.3, chr-F: 0.363\ntestset: URL, BLEU: 3.2, chr-F: 0.166\ntestset: URL, BLEU: 0.1, chr-F: 0.090\ntestset: URL, BLEU: 1.8, chr-F: 0.206\ntestset: URL, BLEU: 27.8, chr-F: 0.560\ntestset: URL, BLEU: 4.2, chr-F: 0.316\ntestset: URL, BLEU: 24.6, chr-F: 0.466\ntestset: URL, BLEU: 24.5, chr-F: 0.431\ntestset: URL, BLEU: 5.0, chr-F: 0.318\ntestset: URL, BLEU: 19.0, chr-F: 0.390\ntestset: URL, BLEU: 15.0, chr-F: 0.258\ntestset: URL, BLEU: 7.4, chr-F: 0.326\ntestset: URL, BLEU: 12.3, chr-F: 0.325\ntestset: URL, BLEU: 14.2, chr-F: 0.324\ntestset: URL, BLEU: 16.1, chr-F: 0.369\ntestset: URL, BLEU: 3.2, chr-F: 0.125\ntestset: URL, BLEU: 55.9, chr-F: 0.672\ntestset: URL, BLEU: 0.3, chr-F: 0.083\ntestset: URL, BLEU: 7.2, chr-F: 0.383\ntestset: URL, BLEU: 0.0, chr-F: 0.102\ntestset: URL, BLEU: 1.9, chr-F: 0.135",
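The BLEU and chr-F columns throughout these benchmark lists are corpus-level scores of the kind produced by sacrebleu. As a minimal sketch (not the exact OPUS-MT evaluation pipeline), assuming hypothetical `hypotheses.txt` and `references.txt` files with one sentence per line, comparable scores can be computed like this:

```python
# Minimal sketch: corpus-level BLEU and chrF with sacrebleu.
# hypotheses.txt (model output) and references.txt (gold translations)
# are hypothetical file names, one sentence per line.
import sacrebleu

with open("hypotheses.txt", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]
with open("references.txt", encoding="utf-8") as f:
    refs = [line.strip() for line in f]

# sacrebleu expects a list of reference streams (one per reference set).
bleu = sacrebleu.corpus_bleu(hyps, [refs])
chrf = sacrebleu.corpus_chrf(hyps, [refs])

# Note: recent sacrebleu versions report chrF on a 0-100 scale,
# while these cards list chr-F on a 0-1 scale.
print(f"BLEU: {bleu.score:.1f}, chrF: {chrf.score:.3f}")
```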
"### System Info:\n\n\n* hf\\_name: ine-ine\n* source\\_languages: ine\n* target\\_languages: ine\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'en', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']\n* src\\_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos\\_Latn', 'lad\\_Latn', 'lat\\_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur\\_Latn', 'arg', 'pes\\_Thaa', 'sqi', 'csb\\_Latn', 'fra', 'hat', 'non\\_Latn', 'sco', 'pnb', 'roh', 'bul\\_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw\\_Latn', 'hsb', 'tly\\_Latn', 'bul', 'bel', 'got\\_Goth', 'lat\\_Grek', 'ext', 'gla', 'mai', 'sin', 'hif\\_Latn', 'eng', 'bre', 'nob\\_Hebr', 'prg\\_Latn', 'ang\\_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr\\_Arab', 'san\\_Deva', 'gos', 'rus', 'fao', 'orv\\_Cyrl', 'bel\\_Latn', 'cos', 'zza', 'grc\\_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk\\_Cyrl', 'hye\\_Latn', 'pdc', 'srp\\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp\\_Latn', 'zlm\\_Latn', 'ind', 'rom', 'hye', 'scn', 'enm\\_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus\\_Latn', 'jdt\\_Cyrl', 'gsw', 'glv', 'nld', 'snd\\_Arab', 'kur\\_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm\\_Latn', 'ksh', 'pan\\_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld\\_Latn', 'ces', 'egl', 'vec', 'max\\_Latn', 'pes\\_Latn', 'ltg', 'nds'}\n* tgt\\_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos\\_Latn', 'lad\\_Latn', 'lat\\_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur\\_Latn', 'arg', 'pes\\_Thaa', 'sqi', 'csb\\_Latn', 'fra', 'hat', 'non\\_Latn', 'sco', 'pnb', 'roh', 'bul\\_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw\\_Latn', 'hsb', 'tly\\_Latn', 'bul', 'bel', 'got\\_Goth', 'lat\\_Grek', 'ext', 'gla', 'mai', 'sin', 'hif\\_Latn', 'eng', 'bre', 'nob\\_Hebr', 'prg\\_Latn', 'ang\\_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr\\_Arab', 'san\\_Deva', 'gos', 'rus', 'fao', 'orv\\_Cyrl', 'bel\\_Latn', 'cos', 'zza', 'grc\\_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk\\_Cyrl', 'hye\\_Latn', 'pdc', 'srp\\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp\\_Latn', 'zlm\\_Latn', 'ind', 'rom', 'hye', 'scn', 'enm\\_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus\\_Latn', 'jdt\\_Cyrl', 'gsw', 'glv', 'nld', 'snd\\_Arab', 'kur\\_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm\\_Latn', 'ksh', 'pan\\_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld\\_Latn', 'ces', 'egl', 'vec', 'max\\_Latn', 'pes\\_Latn', 'ltg', 'nds'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ine\n* tgt\\_alpha3: ine\n* short\\_pair: ine-ine\n* chrF2\\_score: 0.509\n* bleu: 30.8\n* brevity\\_penalty: 0.9890000000000001\n* ref\\_len: 69953.0\n* src\\_name: Indo-European languages\n* tgt\\_name: Indo-European languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: ine\n* tgt\\_alpha2: ine\n* prefer\\_old: False\n* long\\_pair: ine-ine\n* 
helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
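Since both sides of this model are multilingual (src\_multilingual and tgt\_multilingual are True), inference requires the sentence-initial '>>id<<' target-language token described in the model description below. A minimal, illustrative usage sketch with the standard transformers Marian classes follows; the input sentence and the '>>deu<<' target ID are arbitrary choices, not taken from the card:

```python
# Minimal usage sketch for the multilingual ine-ine model.
# The '>>deu<<' prefix selects German as the target language, per the
# card's requirement of a sentence-initial '>>id<<' token; the example
# sentence is arbitrary.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ine-ine"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = [">>deu<< This model translates between Indo-European languages."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```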
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ca #es #os #ro #fy #cy #sc #is #yi #lb #an #sq #fr #ht #rm #ps #af #uk #sl #lt #bg #be #gd #si #en #br #mk #or #mr #ru #fo #co #oc #pl #gl #nb #bn #id #hy #da #gv #nl #pt #hi #as #kw #ga #sv #gu #wa #lv #el #it #hr #ur #nn #de #cs #ine #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ine-ine\n\n\n* source group: Indo-European languages\n* target group: Indo-European languages\n* OPUS readme: ine-ine\n* model: transformer\n* source language(s): afr afr\\_Arab aln ang\\_Latn arg asm ast awa bel bel\\_Latn ben bho bjn bos\\_Latn bre bul bul\\_Latn cat ces cor cos csb\\_Latn cym dan deu dsb egl ell eng enm\\_Latn ext fao fra frm\\_Latn frr fry gcf\\_Latn gla gle glg glv gom gos got\\_Goth grc\\_Grek gsw guj hat hif\\_Latn hin hrv hsb hye hye\\_Latn ind isl ita jdt\\_Cyrl ksh kur\\_Arab kur\\_Latn lad lad\\_Latn lat\\_Grek lat\\_Latn lav lij lit lld\\_Latn lmo ltg ltz mai mar max\\_Latn mfe min mkd mwl nds nld nno nob nob\\_Hebr non\\_Latn npi oci ori orv\\_Cyrl oss pan\\_Guru pap pcd pdc pes pes\\_Latn pes\\_Thaa pms pnb pol por prg\\_Latn pus roh rom ron rue rus rus\\_Latn san\\_Deva scn sco sgs sin slv snd\\_Arab spa sqi srd srp\\_Cyrl srp\\_Latn stq swe swg tgk\\_Cyrl tly\\_Latn tmw\\_Latn ukr urd vec wln yid zlm\\_Latn zsm\\_Latn zza\n* target language(s): afr afr\\_Arab aln ang\\_Latn arg asm ast awa bel bel\\_Latn ben bho bjn bos\\_Latn bre bul bul\\_Latn cat ces cor cos csb\\_Latn cym dan deu dsb egl ell eng enm\\_Latn ext fao fra frm\\_Latn frr fry gcf\\_Latn gla gle glg glv gom gos got\\_Goth grc\\_Grek gsw guj hat hif\\_Latn hin hrv hsb hye hye\\_Latn ind isl ita jdt\\_Cyrl ksh kur\\_Arab kur\\_Latn lad lad\\_Latn lat\\_Grek lat\\_Latn lav lij lit lld\\_Latn lmo ltg ltz mai mar max\\_Latn mfe min mkd mwl nds nld nno nob nob\\_Hebr non\\_Latn npi oci ori orv\\_Cyrl oss pan\\_Guru pap pcd pdc pes pes\\_Latn pes\\_Thaa pms pnb pol por prg\\_Latn pus roh rom ron rue rus rus\\_Latn san\\_Deva scn sco sgs sin slv snd\\_Arab spa sqi srd srp\\_Cyrl srp\\_Latn stq swe swg tgk\\_Cyrl tly\\_Latn tmw\\_Latn ukr urd vec wln yid zlm\\_Latn zsm\\_Latn zza\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: euelections\\_dev2019.URL, BLEU: 19.2, chr-F: 0.482\ntestset: euelections\\_dev2019.URL, BLEU: 15.8, chr-F: 0.470\ntestset: URL, BLEU: 4.0, chr-F: 0.245\ntestset: URL, BLEU: 6.8, chr-F: 0.301\ntestset: URL, BLEU: 17.3, chr-F: 0.470\ntestset: URL, BLEU: 26.0, chr-F: 0.534\ntestset: URL, BLEU: 12.1, chr-F: 0.416\ntestset: URL, BLEU: 15.9, chr-F: 0.443\ntestset: URL, BLEU: 2.5, chr-F: 0.200\ntestset: URL, BLEU: 7.1, chr-F: 0.302\ntestset: URL, BLEU: 10.6, chr-F: 0.407\ntestset: URL, BLEU: 14.9, chr-F: 0.428\ntestset: URL, BLEU: 22.6, chr-F: 0.507\ntestset: URL, BLEU: 23.5, chr-F: 0.495\ntestset: URL, BLEU: 25.1, chr-F: 0.528\ntestset: URL, BLEU: 26.4, chr-F: 0.517\ntestset: URL, BLEU: 13.1, chr-F: 0.432\ntestset: URL, BLEU: 18.4, chr-F: 0.463\ntestset: URL, BLEU: 15.5, chr-F: 0.452\ntestset: URL, BLEU: 14.8, chr-F: 0.458\ntestset: URL, BLEU: 18.4, chr-F: 0.462\ntestset: URL, BLEU: 10.5, chr-F: 0.381\ntestset: URL, BLEU: 19.5, chr-F: 0.467\ntestset: URL, BLEU: 16.4, chr-F: 0.459\ntestset: URL, BLEU: 15.5, chr-F: 0.456\ntestset: URL, BLEU: 18.4, chr-F: 0.466\ntestset: URL, BLEU: 11.9, chr-F: 0.394\ntestset: URL, BLEU: 13.9, chr-F: 0.446\ntestset: URL, BLEU: 20.7, chr-F: 0.502\ntestset: URL, BLEU: 21.3, chr-F: 0.516\ntestset: URL, BLEU: 22.3, chr-F: 0.506\ntestset: URL, BLEU: 11.5, chr-F: 0.390\ntestset: URL, BLEU: 13.4, chr-F: 0.437\ntestset: URL, BLEU: 22.8, chr-F: 0.499\ntestset: URL, BLEU: 22.2, chr-F: 
0.533\ntestset: URL, BLEU: 26.2, chr-F: 0.539\ntestset: URL, BLEU: 12.3, chr-F: 0.397\ntestset: URL, BLEU: 13.3, chr-F: 0.436\ntestset: URL, BLEU: 24.7, chr-F: 0.517\ntestset: URL, BLEU: 24.0, chr-F: 0.528\ntestset: URL, BLEU: 26.3, chr-F: 0.537\ntestset: URL, BLEU: 12.0, chr-F: 0.400\ntestset: URL, BLEU: 13.9, chr-F: 0.440\ntestset: URL, BLEU: 22.9, chr-F: 0.509\ntestset: URL, BLEU: 24.2, chr-F: 0.538\ntestset: URL, BLEU: 24.5, chr-F: 0.547\ntestset: URL, BLEU: 12.0, chr-F: 0.422\ntestset: URL, BLEU: 15.1, chr-F: 0.444\ntestset: URL, BLEU: 16.4, chr-F: 0.451\ntestset: URL, BLEU: 9.9, chr-F: 0.369\ntestset: URL, BLEU: 18.0, chr-F: 0.456\ntestset: URL, BLEU: 16.4, chr-F: 0.453\ntestset: URL, BLEU: 17.0, chr-F: 0.452\ntestset: URL, BLEU: 10.5, chr-F: 0.375\ntestset: URL, BLEU: 14.5, chr-F: 0.439\ntestset: URL, BLEU: 18.9, chr-F: 0.481\ntestset: URL, BLEU: 20.9, chr-F: 0.491\ntestset: URL, BLEU: 10.7, chr-F: 0.380\ntestset: URL, BLEU: 13.8, chr-F: 0.435\ntestset: URL, BLEU: 19.8, chr-F: 0.479\ntestset: URL, BLEU: 24.8, chr-F: 0.522\ntestset: URL, BLEU: 11.0, chr-F: 0.380\ntestset: URL, BLEU: 14.0, chr-F: 0.433\ntestset: URL, BLEU: 20.6, chr-F: 0.488\ntestset: URL, BLEU: 23.3, chr-F: 0.518\ntestset: URL, BLEU: 12.9, chr-F: 0.427\ntestset: URL, BLEU: 17.0, chr-F: 0.456\ntestset: URL, BLEU: 15.4, chr-F: 0.447\ntestset: URL, BLEU: 14.9, chr-F: 0.454\ntestset: URL, BLEU: 17.1, chr-F: 0.458\ntestset: URL, BLEU: 10.3, chr-F: 0.370\ntestset: URL, BLEU: 17.7, chr-F: 0.458\ntestset: URL, BLEU: 15.9, chr-F: 0.447\ntestset: URL, BLEU: 14.7, chr-F: 0.446\ntestset: URL, BLEU: 17.2, chr-F: 0.453\ntestset: URL, BLEU: 11.0, chr-F: 0.387\ntestset: URL, BLEU: 13.6, chr-F: 0.440\ntestset: URL, BLEU: 20.3, chr-F: 0.496\ntestset: URL, BLEU: 20.8, chr-F: 0.509\ntestset: URL, BLEU: 21.9, chr-F: 0.503\ntestset: URL, BLEU: 11.3, chr-F: 0.385\ntestset: URL, BLEU: 14.0, chr-F: 0.436\ntestset: URL, BLEU: 21.8, chr-F: 0.496\ntestset: URL, BLEU: 22.1, chr-F: 0.526\ntestset: URL, BLEU: 24.8, chr-F: 0.525\ntestset: URL, BLEU: 11.5, chr-F: 0.382\ntestset: URL, BLEU: 13.3, chr-F: 0.430\ntestset: URL, BLEU: 23.6, chr-F: 0.508\ntestset: URL, BLEU: 22.9, chr-F: 0.516\ntestset: URL, BLEU: 25.4, chr-F: 0.529\ntestset: URL, BLEU: 11.3, chr-F: 0.386\ntestset: URL, BLEU: 13.5, chr-F: 0.434\ntestset: URL, BLEU: 22.4, chr-F: 0.500\ntestset: URL, BLEU: 23.2, chr-F: 0.520\ntestset: URL, BLEU: 24.0, chr-F: 0.538\ntestset: URL, BLEU: 13.1, chr-F: 0.431\ntestset: URL, BLEU: 16.9, chr-F: 0.459\ntestset: URL, BLEU: 15.6, chr-F: 0.450\ntestset: URL, BLEU: 18.5, chr-F: 0.467\ntestset: URL, BLEU: 11.4, chr-F: 0.387\ntestset: URL, BLEU: 19.6, chr-F: 0.481\ntestset: URL, BLEU: 17.7, chr-F: 0.471\ntestset: URL, BLEU: 20.0, chr-F: 0.478\ntestset: URL, BLEU: 11.4, chr-F: 0.393\ntestset: URL, BLEU: 15.1, chr-F: 0.448\ntestset: URL, BLEU: 21.4, chr-F: 0.506\ntestset: URL, BLEU: 25.0, chr-F: 0.525\ntestset: URL, BLEU: 11.1, chr-F: 0.386\ntestset: URL, BLEU: 14.2, chr-F: 0.442\ntestset: URL, BLEU: 22.6, chr-F: 0.507\ntestset: URL, BLEU: 26.6, chr-F: 0.542\ntestset: URL, BLEU: 12.2, chr-F: 0.396\ntestset: URL, BLEU: 15.1, chr-F: 0.445\ntestset: URL, BLEU: 24.3, chr-F: 0.521\ntestset: URL, BLEU: 24.8, chr-F: 0.536\ntestset: URL, BLEU: 13.1, chr-F: 0.423\ntestset: URL, BLEU: 18.2, chr-F: 0.463\ntestset: URL, BLEU: 17.4, chr-F: 0.458\ntestset: URL, BLEU: 18.9, chr-F: 0.464\ntestset: URL, BLEU: 11.2, chr-F: 0.376\ntestset: URL, BLEU: 18.3, chr-F: 0.464\ntestset: URL, BLEU: 17.0, chr-F: 0.457\ntestset: URL, BLEU: 19.2, chr-F: 0.464\ntestset: URL, BLEU: 12.4, 
chr-F: 0.395\ntestset: URL, BLEU: 14.5, chr-F: 0.437\ntestset: URL, BLEU: 23.6, chr-F: 0.522\ntestset: URL, BLEU: 26.6, chr-F: 0.530\ntestset: URL, BLEU: 12.5, chr-F: 0.394\ntestset: URL, BLEU: 14.2, chr-F: 0.433\ntestset: URL, BLEU: 24.3, chr-F: 0.521\ntestset: URL, BLEU: 29.1, chr-F: 0.551\ntestset: URL, BLEU: 12.3, chr-F: 0.390\ntestset: URL, BLEU: 14.4, chr-F: 0.435\ntestset: URL, BLEU: 25.0, chr-F: 0.521\ntestset: URL, BLEU: 25.6, chr-F: 0.537\ntestset: URL, BLEU: 13.1, chr-F: 0.420\ntestset: URL, BLEU: 17.5, chr-F: 0.457\ntestset: URL, BLEU: 16.8, chr-F: 0.452\ntestset: URL, BLEU: 11.2, chr-F: 0.379\ntestset: URL, BLEU: 18.1, chr-F: 0.457\ntestset: URL, BLEU: 11.2, chr-F: 0.368\ntestset: URL, BLEU: 19.4, chr-F: 0.472\ntestset: URL, BLEU: 17.7, chr-F: 0.464\ntestset: URL, BLEU: 10.3, chr-F: 0.370\ntestset: URL, BLEU: 19.6, chr-F: 0.467\ntestset: URL, BLEU: 11.1, chr-F: 0.375\ntestset: URL, BLEU: 14.6, chr-F: 0.440\ntestset: URL, BLEU: 22.4, chr-F: 0.512\ntestset: URL, BLEU: 17.6, chr-F: 0.452\ntestset: URL, BLEU: 26.5, chr-F: 0.527\ntestset: URL, BLEU: 11.9, chr-F: 0.383\ntestset: URL, BLEU: 14.6, chr-F: 0.437\ntestset: URL, BLEU: 24.3, chr-F: 0.516\ntestset: URL, BLEU: 11.9, chr-F: 0.393\ntestset: URL, BLEU: 28.3, chr-F: 0.545\ntestset: URL, BLEU: 9.0, chr-F: 0.340\ntestset: URL, BLEU: 10.0, chr-F: 0.383\ntestset: URL, BLEU: 22.4, chr-F: 0.492\ntestset: URL, BLEU: 13.3, chr-F: 0.427\ntestset: URL, BLEU: 16.6, chr-F: 0.437\ntestset: URL, BLEU: 11.9, chr-F: 0.381\ntestset: URL, BLEU: 14.8, chr-F: 0.440\ntestset: URL, BLEU: 26.5, chr-F: 0.534\ntestset: URL, BLEU: 25.0, chr-F: 0.539\ntestset: URL, BLEU: 12.4, chr-F: 0.401\ntestset: URL, BLEU: 14.3, chr-F: 0.434\ntestset: URL, BLEU: 18.5, chr-F: 0.463\ntestset: URL, BLEU: 16.6, chr-F: 0.444\ntestset: URL, BLEU: 13.6, chr-F: 0.406\ntestset: URL, BLEU: 18.2, chr-F: 0.455\ntestset: URL, BLEU: 11.7, chr-F: 0.380\ntestset: URL, BLEU: 20.9, chr-F: 0.481\ntestset: URL, BLEU: 18.1, chr-F: 0.460\ntestset: URL, BLEU: 11.7, chr-F: 0.384\ntestset: URL, BLEU: 19.4, chr-F: 0.463\ntestset: URL, BLEU: 12.7, chr-F: 0.394\ntestset: URL, BLEU: 16.7, chr-F: 0.455\ntestset: URL, BLEU: 22.7, chr-F: 0.499\ntestset: URL, BLEU: 13.3, chr-F: 0.408\ntestset: URL, BLEU: 23.6, chr-F: 0.506\ntestset: URL, BLEU: 11.8, chr-F: 0.379\ntestset: URL, BLEU: 15.6, chr-F: 0.446\ntestset: URL, BLEU: 23.6, chr-F: 0.506\ntestset: URL, BLEU: 12.9, chr-F: 0.399\ntestset: URL, BLEU: 25.3, chr-F: 0.519\ntestset: URL, BLEU: 11.6, chr-F: 0.376\ntestset: URL, BLEU: 12.4, chr-F: 0.410\ntestset: URL, BLEU: 17.8, chr-F: 0.448\ntestset: URL, BLEU: 14.8, chr-F: 0.434\ntestset: URL, BLEU: 17.9, chr-F: 0.446\ntestset: URL, BLEU: 12.5, chr-F: 0.391\ntestset: URL, BLEU: 15.9, chr-F: 0.449\ntestset: URL, BLEU: 24.0, chr-F: 0.518\ntestset: URL, BLEU: 24.3, chr-F: 0.522\ntestset: URL, BLEU: 13.9, chr-F: 0.411\ntestset: URL, BLEU: 19.0, chr-F: 0.475\ntestset: URL, BLEU: 19.2, chr-F: 0.468\ntestset: URL, BLEU: 23.9, chr-F: 0.521\ntestset: URL, BLEU: 5.9, chr-F: 0.268\ntestset: URL, BLEU: 8.8, chr-F: 0.348\ntestset: URL, BLEU: 19.1, chr-F: 0.475\ntestset: URL, BLEU: 17.9, chr-F: 0.450\ntestset: URL, BLEU: 12.1, chr-F: 0.392\ntestset: URL, BLEU: 21.1, chr-F: 0.480\ntestset: URL, BLEU: 18.7, chr-F: 0.475\ntestset: URL, BLEU: 15.4, chr-F: 0.431\ntestset: URL, BLEU: 18.1, chr-F: 0.454\ntestset: URL, BLEU: 18.6, chr-F: 0.465\ntestset: URL, BLEU: 13.3, chr-F: 0.403\ntestset: URL, BLEU: 24.0, chr-F: 0.508\ntestset: URL, BLEU: 21.4, chr-F: 0.494\ntestset: URL, BLEU: 16.8, chr-F: 0.457\ntestset: URL, BLEU: 
24.9, chr-F: 0.522\ntestset: URL, BLEU: 13.7, chr-F: 0.417\ntestset: URL, BLEU: 17.3, chr-F: 0.453\ntestset: URL, BLEU: 16.7, chr-F: 0.444\ntestset: URL, BLEU: 10.9, chr-F: 0.375\ntestset: URL, BLEU: 21.5, chr-F: 0.484\ntestset: URL, BLEU: 17.5, chr-F: 0.464\ntestset: URL, BLEU: 9.1, chr-F: 0.388\ntestset: URL, BLEU: 11.5, chr-F: 0.404\ntestset: URL, BLEU: 14.8, chr-F: 0.432\ntestset: URL, BLEU: 19.3, chr-F: 0.467\ntestset: URL, BLEU: 17.1, chr-F: 0.450\ntestset: URL, BLEU: 10.9, chr-F: 0.380\ntestset: URL, BLEU: 26.0, chr-F: 0.518\ntestset: URL, BLEU: 24.3, chr-F: 0.514\ntestset: URL, BLEU: 12.5, chr-F: 0.417\ntestset: URL, BLEU: 16.4, chr-F: 0.443\ntestset: URL, BLEU: 13.9, chr-F: 0.432\ntestset: URL, BLEU: 11.7, chr-F: 0.383\ntestset: URL, BLEU: 22.2, chr-F: 0.483\ntestset: URL, BLEU: 20.1, chr-F: 0.496\ntestset: URL, BLEU: 12.3, chr-F: 0.389\ntestset: URL, BLEU: 22.0, chr-F: 0.497\ntestset: URL, BLEU: 3.1, chr-F: 0.208\ntestset: URL, BLEU: 7.8, chr-F: 0.369\ntestset: URL, BLEU: 14.6, chr-F: 0.408\ntestset: URL, BLEU: 16.4, chr-F: 0.483\ntestset: URL, BLEU: 6.1, chr-F: 0.288\ntestset: URL, BLEU: 16.9, chr-F: 0.456\ntestset: URL, BLEU: 20.2, chr-F: 0.468\ntestset: URL, BLEU: 16.0, chr-F: 0.152\ntestset: URL, BLEU: 10.2, chr-F: 0.333\ntestset: URL, BLEU: 32.6, chr-F: 0.651\ntestset: URL, BLEU: 34.5, chr-F: 0.556\ntestset: URL, BLEU: 48.1, chr-F: 0.638\ntestset: URL, BLEU: 10.2, chr-F: 0.416\ntestset: URL, BLEU: 41.9, chr-F: 0.612\ntestset: URL, BLEU: 0.0, chr-F: 0.112\ntestset: URL, BLEU: 0.3, chr-F: 0.068\ntestset: URL, BLEU: 12.2, chr-F: 0.419\ntestset: URL, BLEU: 48.7, chr-F: 0.637\ntestset: URL, BLEU: 8.4, chr-F: 0.407\ntestset: URL, BLEU: 19.0, chr-F: 0.357\ntestset: URL, BLEU: 0.0, chr-F: 0.238\ntestset: URL, BLEU: 1.4, chr-F: 0.080\ntestset: URL, BLEU: 45.7, chr-F: 0.643\ntestset: URL, BLEU: 55.3, chr-F: 0.687\ntestset: URL, BLEU: 39.3, chr-F: 0.563\ntestset: URL, BLEU: 33.9, chr-F: 0.586\ntestset: URL, BLEU: 22.6, chr-F: 0.475\ntestset: URL, BLEU: 32.1, chr-F: 0.525\ntestset: URL, BLEU: 44.1, chr-F: 0.611\ntestset: URL, BLEU: 71.6, chr-F: 0.814\ntestset: URL, BLEU: 31.0, chr-F: 0.481\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 0.0, chr-F: 0.133\ntestset: URL, BLEU: 5.5, chr-F: 0.129\ntestset: URL, BLEU: 22.2, chr-F: 0.345\ntestset: URL, BLEU: 6.3, chr-F: 0.251\ntestset: URL, BLEU: 7.9, chr-F: 0.255\ntestset: URL, BLEU: 0.8, chr-F: 0.133\ntestset: URL, BLEU: 16.0, chr-F: 0.086\ntestset: URL, BLEU: 6.0, chr-F: 0.185\ntestset: URL, BLEU: 0.6, chr-F: 0.000\ntestset: URL, BLEU: 16.0, chr-F: 0.102\ntestset: URL, BLEU: 13.2, chr-F: 0.301\ntestset: URL, BLEU: 7.6, chr-F: 0.062\ntestset: URL, BLEU: 0.2, chr-F: 0.025\ntestset: URL, BLEU: 6.6, chr-F: 0.198\ntestset: URL, BLEU: 5.5, chr-F: 0.121\ntestset: URL, BLEU: 11.4, chr-F: 0.498\ntestset: URL, BLEU: 2.4, chr-F: 0.103\ntestset: URL, BLEU: 8.1, chr-F: 0.249\ntestset: URL, BLEU: 16.4, chr-F: 0.195\ntestset: URL, BLEU: 1.1, chr-F: 0.117\ntestset: URL, BLEU: 28.2, chr-F: 0.394\ntestset: URL, BLEU: 39.8, chr-F: 0.445\ntestset: URL, BLEU: 52.3, chr-F: 0.608\ntestset: URL, BLEU: 8.6, chr-F: 0.261\ntestset: URL, BLEU: 19.2, chr-F: 0.629\ntestset: URL, BLEU: 18.2, chr-F: 0.369\ntestset: URL, BLEU: 4.3, chr-F: 0.145\ntestset: URL, BLEU: 4.5, chr-F: 0.366\ntestset: URL, BLEU: 12.1, chr-F: 0.310\ntestset: URL, BLEU: 8.1, chr-F: 0.050\ntestset: URL, BLEU: 30.1, chr-F: 0.463\ntestset: URL, BLEU: 27.6, chr-F: 0.441\ntestset: URL, BLEU: 29.4, chr-F: 0.501\ntestset: URL, BLEU: 2.6, chr-F: 0.030\ntestset: URL, BLEU: 10.0, chr-F: 
0.280\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 35.9, chr-F: 0.682\ntestset: URL, BLEU: 41.7, chr-F: 0.601\ntestset: URL, BLEU: 2.4, chr-F: 0.201\ntestset: URL, BLEU: 53.7, chr-F: 0.808\ntestset: URL, BLEU: 27.6, chr-F: 0.483\ntestset: URL, BLEU: 32.6, chr-F: 0.449\ntestset: URL, BLEU: 29.1, chr-F: 0.506\ntestset: URL, BLEU: 29.5, chr-F: 0.522\ntestset: URL, BLEU: 31.8, chr-F: 0.512\ntestset: URL, BLEU: 30.9, chr-F: 0.527\ntestset: URL, BLEU: 39.3, chr-F: 0.608\ntestset: URL, BLEU: 32.8, chr-F: 0.540\ntestset: URL, BLEU: 12.7, chr-F: 0.178\ntestset: URL, BLEU: 4.5, chr-F: 0.185\ntestset: URL, BLEU: 3.7, chr-F: 0.251\ntestset: URL, BLEU: 19.3, chr-F: 0.531\ntestset: URL, BLEU: 1.0, chr-F: 0.147\ntestset: URL, BLEU: 27.1, chr-F: 0.481\ntestset: URL, BLEU: 37.0, chr-F: 0.494\ntestset: URL, BLEU: 34.8, chr-F: 0.565\ntestset: URL, BLEU: 21.7, chr-F: 0.401\ntestset: URL, BLEU: 42.3, chr-F: 0.643\ntestset: URL, BLEU: 28.2, chr-F: 0.534\ntestset: URL, BLEU: 41.6, chr-F: 0.643\ntestset: URL, BLEU: 2.9, chr-F: 0.254\ntestset: URL, BLEU: 34.6, chr-F: 0.408\ntestset: URL, BLEU: 26.5, chr-F: 0.430\ntestset: URL, BLEU: 21.6, chr-F: 0.466\ntestset: URL, BLEU: 26.8, chr-F: 0.424\ntestset: URL, BLEU: 28.9, chr-F: 0.473\ntestset: URL, BLEU: 21.0, chr-F: 0.384\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 2.2, chr-F: 0.178\ntestset: URL, BLEU: 7.7, chr-F: 0.296\ntestset: URL, BLEU: 13.6, chr-F: 0.309\ntestset: URL, BLEU: 8.6, chr-F: 0.251\ntestset: URL, BLEU: 12.2, chr-F: 0.272\ntestset: URL, BLEU: 0.9, chr-F: 0.081\ntestset: URL, BLEU: 3.0, chr-F: 0.217\ntestset: URL, BLEU: 1.4, chr-F: 0.158\ntestset: URL, BLEU: 14.1, chr-F: 0.582\ntestset: URL, BLEU: 52.8, chr-F: 0.725\ntestset: URL, BLEU: 66.9, chr-F: 0.951\ntestset: URL, BLEU: 31.2, chr-F: 0.530\ntestset: URL, BLEU: 29.1, chr-F: 0.497\ntestset: URL, BLEU: 36.5, chr-F: 0.547\ntestset: URL, BLEU: 5.3, chr-F: 0.299\ntestset: URL, BLEU: 8.9, chr-F: 0.511\ntestset: URL, BLEU: 36.1, chr-F: 0.558\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 24.5, chr-F: 0.479\ntestset: URL, BLEU: 8.1, chr-F: 0.302\ntestset: URL, BLEU: 13.4, chr-F: 0.337\ntestset: URL, BLEU: 38.2, chr-F: 0.811\ntestset: URL, BLEU: 15.0, chr-F: 0.431\ntestset: URL, BLEU: 31.8, chr-F: 0.505\ntestset: URL, BLEU: 66.9, chr-F: 0.951\ntestset: URL, BLEU: 24.4, chr-F: 0.461\ntestset: URL, BLEU: 29.2, chr-F: 0.484\ntestset: URL, BLEU: 42.7, chr-F: 0.776\ntestset: URL, BLEU: 28.7, chr-F: 0.522\ntestset: URL, BLEU: 32.1, chr-F: 0.520\ntestset: URL, BLEU: 66.9, chr-F: 0.611\ntestset: URL, BLEU: 34.3, chr-F: 0.567\ntestset: URL, BLEU: 13.7, chr-F: 0.163\ntestset: URL, BLEU: 31.0, chr-F: 0.523\ntestset: URL, BLEU: 17.0, chr-F: 0.423\ntestset: URL, BLEU: 39.4, chr-F: 0.582\ntestset: URL, BLEU: 5.3, chr-F: 0.370\ntestset: URL, BLEU: 16.0, chr-F: 0.301\ntestset: URL, BLEU: 41.0, chr-F: 0.606\ntestset: URL, BLEU: 39.8, chr-F: 0.626\ntestset: URL, BLEU: 35.9, chr-F: 0.555\ntestset: URL, BLEU: 23.0, chr-F: 0.456\ntestset: URL, BLEU: 38.9, chr-F: 0.618\ntestset: URL, BLEU: 16.0, chr-F: 0.311\ntestset: URL, BLEU: 28.8, chr-F: 0.507\ntestset: URL, BLEU: 55.2, chr-F: 0.731\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 30.8, chr-F: 0.512\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 17.0, chr-F: 0.426\ntestset: URL, BLEU: 3.3, chr-F: 0.165\ntestset: URL, BLEU: 23.3, chr-F: 0.466\ntestset: URL, BLEU: 0.7, chr-F: 0.126\ntestset: URL, BLEU: 45.2, chr-F: 0.690\ntestset: URL, BLEU: 3.4, chr-F: 
0.072\ntestset: URL, BLEU: 12.7, chr-F: 0.706\ntestset: URL, BLEU: 32.2, chr-F: 0.526\ntestset: URL, BLEU: 24.4, chr-F: 0.422\ntestset: URL, BLEU: 33.8, chr-F: 0.529\ntestset: URL, BLEU: 1.7, chr-F: 0.157\ntestset: URL, BLEU: 3.7, chr-F: 0.252\ntestset: URL, BLEU: 20.1, chr-F: 0.229\ntestset: URL, BLEU: 36.9, chr-F: 0.564\ntestset: URL, BLEU: 7.7, chr-F: 0.338\ntestset: URL, BLEU: 0.6, chr-F: 0.011\ntestset: URL, BLEU: 39.7, chr-F: 0.580\ntestset: URL, BLEU: 7.0, chr-F: 0.230\ntestset: URL, BLEU: 28.2, chr-F: 0.516\ntestset: URL, BLEU: 1.7, chr-F: 0.303\ntestset: URL, BLEU: 6.5, chr-F: 0.304\ntestset: URL, BLEU: 6.6, chr-F: 0.202\ntestset: URL, BLEU: 31.4, chr-F: 0.586\ntestset: URL, BLEU: 6.4, chr-F: 0.312\ntestset: URL, BLEU: 19.9, chr-F: 0.468\ntestset: URL, BLEU: 35.1, chr-F: 0.535\ntestset: URL, BLEU: 41.7, chr-F: 0.610\ntestset: URL, BLEU: 30.5, chr-F: 0.530\ntestset: URL, BLEU: 33.0, chr-F: 0.533\ntestset: URL, BLEU: 9.9, chr-F: 0.406\ntestset: URL, BLEU: 36.9, chr-F: 0.564\ntestset: URL, BLEU: 4.1, chr-F: 0.236\ntestset: URL, BLEU: 33.3, chr-F: 0.531\ntestset: URL, BLEU: 51.4, chr-F: 0.586\ntestset: URL, BLEU: 4.8, chr-F: 0.118\ntestset: URL, BLEU: 34.6, chr-F: 0.522\ntestset: URL, BLEU: 2.1, chr-F: 0.252\ntestset: URL, BLEU: 8.9, chr-F: 0.233\ntestset: URL, BLEU: 6.7, chr-F: 0.205\ntestset: URL, BLEU: 4.8, chr-F: 0.211\ntestset: URL, BLEU: 3.4, chr-F: 0.182\ntestset: URL, BLEU: 4.4, chr-F: 0.193\ntestset: URL, BLEU: 5.0, chr-F: 0.221\ntestset: URL, BLEU: 6.6, chr-F: 0.211\ntestset: URL, BLEU: 9.3, chr-F: 0.221\ntestset: URL, BLEU: 19.6, chr-F: 0.282\ntestset: URL, BLEU: 2.9, chr-F: 0.171\ntestset: URL, BLEU: 4.3, chr-F: 0.187\ntestset: URL, BLEU: 2.4, chr-F: 0.154\ntestset: URL, BLEU: 3.6, chr-F: 0.187\ntestset: URL, BLEU: 0.0, chr-F: 0.877\ntestset: URL, BLEU: 39.2, chr-F: 0.473\ntestset: URL, BLEU: 19.0, chr-F: 0.352\ntestset: URL, BLEU: 1.6, chr-F: 0.066\ntestset: URL, BLEU: 17.5, chr-F: 0.336\ntestset: URL, BLEU: 14.0, chr-F: 0.347\ntestset: URL, BLEU: 3.8, chr-F: 0.278\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 0.0, chr-F: 0.014\ntestset: URL, BLEU: 32.6, chr-F: 0.507\ntestset: URL, BLEU: 33.1, chr-F: 0.496\ntestset: URL, BLEU: 27.0, chr-F: 0.447\ntestset: URL, BLEU: 5.7, chr-F: 0.223\ntestset: URL, BLEU: 13.1, chr-F: 0.380\ntestset: URL, BLEU: 5.3, chr-F: 0.186\ntestset: URL, BLEU: 28.3, chr-F: 0.498\ntestset: URL, BLEU: 3.7, chr-F: 0.185\ntestset: URL, BLEU: 8.0, chr-F: 0.067\ntestset: URL, BLEU: 37.5, chr-F: 0.603\ntestset: URL, BLEU: 37.8, chr-F: 0.488\ntestset: URL, BLEU: 32.1, chr-F: 0.480\ntestset: URL, BLEU: 31.6, chr-F: 0.523\ntestset: URL, BLEU: 4.8, chr-F: 0.072\ntestset: URL, BLEU: 40.5, chr-F: 0.774\ntestset: URL, BLEU: 1.2, chr-F: 0.066\ntestset: URL, BLEU: 13.1, chr-F: 0.156\ntestset: URL, BLEU: 27.2, chr-F: 0.746\ntestset: URL, BLEU: 35.4, chr-F: 0.529\ntestset: URL, BLEU: 19.0, chr-F: 0.349\ntestset: URL, BLEU: 35.8, chr-F: 0.582\ntestset: URL, BLEU: 19.0, chr-F: 0.337\ntestset: URL, BLEU: 43.4, chr-F: 0.609\ntestset: URL, BLEU: 18.1, chr-F: 0.515\ntestset: URL, BLEU: 9.7, chr-F: 0.162\ntestset: URL, BLEU: 14.1, chr-F: 0.410\ntestset: URL, BLEU: 47.0, chr-F: 0.640\ntestset: URL, BLEU: 2.6, chr-F: 0.195\ntestset: URL, BLEU: 12.2, chr-F: 0.344\ntestset: URL, BLEU: 36.3, chr-F: 0.589\ntestset: URL, BLEU: 3.5, chr-F: 0.270\ntestset: URL, BLEU: 0.4, chr-F: 0.096\ntestset: URL, BLEU: 3.9, chr-F: 0.376\ntestset: URL, BLEU: 68.7, chr-F: 0.786\ntestset: URL, BLEU: 71.4, chr-F: 0.554\ntestset: URL, BLEU: 3.7, chr-F: 0.220\ntestset: URL, BLEU: 4.9, 
chr-F: 0.219\ntestset: URL, BLEU: 47.2, chr-F: 0.650\ntestset: URL, BLEU: 58.8, chr-F: 0.749\ntestset: URL, BLEU: 27.1, chr-F: 0.527\ntestset: URL, BLEU: 41.5, chr-F: 0.616\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 30.8, chr-F: 0.518\ntestset: URL, BLEU: 36.6, chr-F: 0.578\ntestset: URL, BLEU: 53.8, chr-F: 0.696\ntestset: URL, BLEU: 4.8, chr-F: 0.184\ntestset: URL, BLEU: 15.9, chr-F: 0.489\ntestset: URL, BLEU: 21.7, chr-F: 0.544\ntestset: URL, BLEU: 13.0, chr-F: 0.252\ntestset: URL, BLEU: 37.5, chr-F: 0.566\ntestset: URL, BLEU: 0.6, chr-F: 0.131\ntestset: URL, BLEU: 20.0, chr-F: 0.580\ntestset: URL, BLEU: 16.5, chr-F: 0.389\ntestset: URL, BLEU: 19.6, chr-F: 0.450\ntestset: URL, BLEU: 34.5, chr-F: 0.319\ntestset: URL, BLEU: 3.2, chr-F: 0.196\ntestset: URL, BLEU: 32.6, chr-F: 0.517\ntestset: URL, BLEU: 28.4, chr-F: 0.503\ntestset: URL, BLEU: 24.3, chr-F: 0.465\ntestset: URL, BLEU: 0.2, chr-F: 0.043\ntestset: URL, BLEU: 2.4, chr-F: 0.020\ntestset: URL, BLEU: 4.4, chr-F: 0.178\ntestset: URL, BLEU: 11.3, chr-F: 0.378\ntestset: URL, BLEU: 37.8, chr-F: 0.579\ntestset: URL, BLEU: 0.1, chr-F: 0.082\ntestset: URL, BLEU: 3.3, chr-F: 0.050\ntestset: URL, BLEU: 27.1, chr-F: 0.485\ntestset: URL, BLEU: 34.7, chr-F: 0.539\ntestset: URL, BLEU: 6.7, chr-F: 0.331\ntestset: URL, BLEU: 4.5, chr-F: 0.235\ntestset: URL, BLEU: 31.9, chr-F: 0.527\ntestset: URL, BLEU: 0.2, chr-F: 0.101\ntestset: URL, BLEU: 13.7, chr-F: 0.358\ntestset: URL, BLEU: 7.2, chr-F: 0.304\ntestset: URL, BLEU: 8.9, chr-F: 0.349\ntestset: URL, BLEU: 28.9, chr-F: 0.513\ntestset: URL, BLEU: 0.7, chr-F: 0.157\ntestset: URL, BLEU: 0.2, chr-F: 0.010\ntestset: URL, BLEU: 0.1, chr-F: 0.005\ntestset: URL, BLEU: 0.2, chr-F: 0.073\ntestset: URL, BLEU: 23.2, chr-F: 0.470\ntestset: URL, BLEU: 12.5, chr-F: 0.367\ntestset: URL, BLEU: 5.4, chr-F: 0.249\ntestset: URL, BLEU: 12.9, chr-F: 0.263\ntestset: URL, BLEU: 16.5, chr-F: 0.395\ntestset: URL, BLEU: 29.2, chr-F: 0.536\ntestset: URL, BLEU: 0.6, chr-F: 0.092\ntestset: URL, BLEU: 11.2, chr-F: 0.183\ntestset: URL, BLEU: 0.3, chr-F: 0.112\ntestset: URL, BLEU: 6.4, chr-F: 0.301\ntestset: URL, BLEU: 29.6, chr-F: 0.502\ntestset: URL, BLEU: 17.4, chr-F: 0.445\ntestset: URL, BLEU: 18.5, chr-F: 0.380\ntestset: URL, BLEU: 7.9, chr-F: 0.245\ntestset: URL, BLEU: 21.9, chr-F: 0.449\ntestset: URL, BLEU: 21.9, chr-F: 0.478\ntestset: URL, BLEU: 13.6, chr-F: 0.391\ntestset: URL, BLEU: 37.2, chr-F: 0.574\ntestset: URL, BLEU: 34.5, chr-F: 0.562\ntestset: URL, BLEU: 4.7, chr-F: 0.261\ntestset: URL, BLEU: 0.2, chr-F: 0.006\ntestset: URL, BLEU: 0.6, chr-F: 0.064\ntestset: URL, BLEU: 0.2, chr-F: 0.064\ntestset: URL, BLEU: 23.6, chr-F: 0.477\ntestset: URL, BLEU: 25.1, chr-F: 0.480\ntestset: URL, BLEU: 0.2, chr-F: 0.070\ntestset: URL, BLEU: 0.2, chr-F: 0.059\ntestset: URL, BLEU: 5.2, chr-F: 0.179\ntestset: URL, BLEU: 25.7, chr-F: 0.484\ntestset: URL, BLEU: 27.1, chr-F: 0.494\ntestset: URL, BLEU: 1.6, chr-F: 0.076\ntestset: URL, BLEU: 10.8, chr-F: 0.281\ntestset: URL, BLEU: 8.1, chr-F: 0.251\ntestset: URL, BLEU: 31.5, chr-F: 0.534\ntestset: URL, BLEU: 0.6, chr-F: 0.144\ntestset: URL, BLEU: 39.1, chr-F: 0.572\ntestset: URL, BLEU: 0.1, chr-F: 0.088\ntestset: URL, BLEU: 13.1, chr-F: 0.406\ntestset: URL, BLEU: 27.2, chr-F: 0.489\ntestset: URL, BLEU: 13.4, chr-F: 0.350\ntestset: URL, BLEU: 6.0, chr-F: 0.262\ntestset: URL, BLEU: 14.1, chr-F: 0.366\ntestset: URL, BLEU: 19.0, chr-F: 0.424\ntestset: URL, BLEU: 15.4, chr-F: 0.342\ntestset: URL, BLEU: 15.2, chr-F: 0.315\ntestset: URL, BLEU: 35.4, chr-F: 0.394\ntestset: URL, 
BLEU: 12.6, chr-F: 0.401\ntestset: URL, BLEU: 2.9, chr-F: 0.168\ntestset: URL, BLEU: 5.2, chr-F: 0.207\ntestset: URL, BLEU: 6.4, chr-F: 0.215\ntestset: URL, BLEU: 1.6, chr-F: 0.180\ntestset: URL, BLEU: 3.9, chr-F: 0.199\ntestset: URL, BLEU: 26.6, chr-F: 0.483\ntestset: URL, BLEU: 20.2, chr-F: 0.398\ntestset: URL, BLEU: 12.1, chr-F: 0.380\ntestset: URL, BLEU: 0.7, chr-F: 0.039\ntestset: URL, BLEU: 53.7, chr-F: 0.513\ntestset: URL, BLEU: 30.5, chr-F: 0.503\ntestset: URL, BLEU: 43.1, chr-F: 0.589\ntestset: URL, BLEU: 12.7, chr-F: 0.541\ntestset: URL, BLEU: 5.3, chr-F: 0.210\ntestset: URL, BLEU: 39.5, chr-F: 0.563\ntestset: URL, BLEU: 11.6, chr-F: 0.343\ntestset: URL, BLEU: 30.9, chr-F: 0.524\ntestset: URL, BLEU: 57.6, chr-F: 0.572\ntestset: URL, BLEU: 4.9, chr-F: 0.244\ntestset: URL, BLEU: 38.0, chr-F: 0.562\ntestset: URL, BLEU: 40.8, chr-F: 0.615\ntestset: URL, BLEU: 72.6, chr-F: 0.846\ntestset: URL, BLEU: 26.8, chr-F: 0.514\ntestset: URL, BLEU: 27.1, chr-F: 0.493\ntestset: URL, BLEU: 30.8, chr-F: 0.512\ntestset: URL, BLEU: 30.8, chr-F: 0.475\ntestset: URL, BLEU: 36.0, chr-F: 0.521\ntestset: URL, BLEU: 12.6, chr-F: 0.364\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 46.1, chr-F: 0.633\ntestset: URL, BLEU: 5.1, chr-F: 0.136\ntestset: URL, BLEU: 5.1, chr-F: 0.199\ntestset: URL, BLEU: 0.8, chr-F: 0.208\ntestset: URL, BLEU: 16.8, chr-F: 0.380\ntestset: URL, BLEU: 0.2, chr-F: 0.002\ntestset: URL, BLEU: 16.6, chr-F: 0.415\ntestset: URL, BLEU: 7.0, chr-F: 0.321\ntestset: URL, BLEU: 0.2, chr-F: 0.003\ntestset: URL, BLEU: 6.6, chr-F: 0.251\ntestset: URL, BLEU: 31.5, chr-F: 0.513\ntestset: URL, BLEU: 33.5, chr-F: 0.550\ntestset: URL, BLEU: 25.6, chr-F: 0.466\ntestset: URL, BLEU: 0.1, chr-F: 0.035\ntestset: URL, BLEU: 0.8, chr-F: 0.135\ntestset: URL, BLEU: 1.4, chr-F: 0.194\ntestset: URL, BLEU: 18.8, chr-F: 0.422\ntestset: URL, BLEU: 41.2, chr-F: 0.591\ntestset: URL, BLEU: 27.9, chr-F: 0.503\ntestset: URL, BLEU: 0.7, chr-F: 0.125\ntestset: URL, BLEU: 0.1, chr-F: 0.062\ntestset: URL, BLEU: 30.7, chr-F: 0.540\ntestset: URL, BLEU: 4.9, chr-F: 0.283\ntestset: URL, BLEU: 3.9, chr-F: 0.217\ntestset: URL, BLEU: 5.9, chr-F: 0.276\ntestset: URL, BLEU: 4.8, chr-F: 0.239\ntestset: URL, BLEU: 34.6, chr-F: 0.551\ntestset: URL, BLEU: 0.2, chr-F: 0.099\ntestset: URL, BLEU: 5.5, chr-F: 0.040\ntestset: URL, BLEU: 13.1, chr-F: 0.357\ntestset: URL, BLEU: 0.4, chr-F: 0.085\ntestset: URL, BLEU: 7.4, chr-F: 0.293\ntestset: URL, BLEU: 20.0, chr-F: 0.415\ntestset: URL, BLEU: 29.9, chr-F: 0.528\ntestset: URL, BLEU: 5.9, chr-F: 0.220\ntestset: URL, BLEU: 0.5, chr-F: 0.137\ntestset: URL, BLEU: 0.1, chr-F: 0.009\ntestset: URL, BLEU: 0.0, chr-F: 0.005\ntestset: URL, BLEU: 0.5, chr-F: 0.103\ntestset: URL, BLEU: 6.4, chr-F: 0.241\ntestset: URL, BLEU: 28.2, chr-F: 0.460\ntestset: URL, BLEU: 26.0, chr-F: 0.485\ntestset: URL, BLEU: 0.8, chr-F: 0.228\ntestset: URL, BLEU: 11.2, chr-F: 0.364\ntestset: URL, BLEU: 10.6, chr-F: 0.277\ntestset: URL, BLEU: 10.9, chr-F: 0.307\ntestset: URL, BLEU: 13.8, chr-F: 0.368\ntestset: URL, BLEU: 33.8, chr-F: 0.571\ntestset: URL, BLEU: 3.0, chr-F: 0.007\ntestset: URL, BLEU: 4.8, chr-F: 0.005\ntestset: URL, BLEU: 0.4, chr-F: 0.092\ntestset: URL, BLEU: 9.0, chr-F: 0.174\ntestset: URL, BLEU: 0.5, chr-F: 0.144\ntestset: URL, BLEU: 0.1, chr-F: 0.000\ntestset: URL, BLEU: 7.7, chr-F: 0.333\ntestset: URL, BLEU: 25.1, chr-F: 0.480\ntestset: URL, BLEU: 0.4, chr-F: 0.101\ntestset: URL, BLEU: 21.0, chr-F: 0.492\ntestset: URL, BLEU: 0.5, chr-F: 0.143\ntestset: URL, BLEU: 0.5, chr-F: 0.135\ntestset: 
URL, BLEU: 15.6, chr-F: 0.345\ntestset: URL, BLEU: 9.3, chr-F: 0.251\ntestset: URL, BLEU: 9.5, chr-F: 0.326\ntestset: URL, BLEU: 54.1, chr-F: 0.747\ntestset: URL, BLEU: 29.8, chr-F: 0.503\ntestset: URL, BLEU: 20.0, chr-F: 0.449\ntestset: URL, BLEU: 9.3, chr-F: 0.231\ntestset: URL, BLEU: 12.2, chr-F: 0.357\ntestset: URL, BLEU: 0.2, chr-F: 0.003\ntestset: URL, BLEU: 37.1, chr-F: 0.570\ntestset: URL, BLEU: 0.5, chr-F: 0.078\ntestset: URL, BLEU: 38.4, chr-F: 0.575\ntestset: URL, BLEU: 4.8, chr-F: 0.249\ntestset: URL, BLEU: 2.8, chr-F: 0.185\ntestset: URL, BLEU: 0.1, chr-F: 0.011\ntestset: URL, BLEU: 2.6, chr-F: 0.166\ntestset: URL, BLEU: 2.6, chr-F: 0.214\ntestset: URL, BLEU: 39.8, chr-F: 0.566\ntestset: URL, BLEU: 1.0, chr-F: 0.131\ntestset: URL, BLEU: 0.9, chr-F: 0.124\ntestset: URL, BLEU: 26.2, chr-F: 0.500\ntestset: URL, BLEU: 31.5, chr-F: 0.545\ntestset: URL, BLEU: 0.2, chr-F: 0.088\ntestset: URL, BLEU: 0.4, chr-F: 0.108\ntestset: URL, BLEU: 1.8, chr-F: 0.192\ntestset: URL, BLEU: 7.6, chr-F: 0.313\ntestset: URL, BLEU: 27.6, chr-F: 0.508\ntestset: URL, BLEU: 0.1, chr-F: 0.011\ntestset: URL, BLEU: 28.6, chr-F: 0.496\ntestset: URL, BLEU: 2.0, chr-F: 0.098\ntestset: URL, BLEU: 0.9, chr-F: 0.080\ntestset: URL, BLEU: 24.5, chr-F: 0.501\ntestset: URL, BLEU: 1.3, chr-F: 0.105\ntestset: URL, BLEU: 3.0, chr-F: 0.178\ntestset: URL, BLEU: 12.5, chr-F: 0.298\ntestset: URL, BLEU: 1.7, chr-F: 0.214\ntestset: URL, BLEU: 36.3, chr-F: 0.575\ntestset: URL, BLEU: 22.1, chr-F: 0.459\ntestset: URL, BLEU: 5.2, chr-F: 0.316\ntestset: URL, BLEU: 42.4, chr-F: 0.591\ntestset: URL, BLEU: 0.6, chr-F: 0.145\ntestset: URL, BLEU: 1.9, chr-F: 0.255\ntestset: URL, BLEU: 0.3, chr-F: 0.054\ntestset: URL, BLEU: 27.3, chr-F: 0.478\ntestset: URL, BLEU: 7.0, chr-F: 0.310\ntestset: URL, BLEU: 0.9, chr-F: 0.116\ntestset: URL, BLEU: 4.0, chr-F: 0.164\ntestset: URL, BLEU: 5.9, chr-F: 0.260\ntestset: URL, BLEU: 0.4, chr-F: 0.071\ntestset: URL, BLEU: 20.1, chr-F: 0.420\ntestset: URL, BLEU: 0.6, chr-F: 0.057\ntestset: URL, BLEU: 22.8, chr-F: 0.278\ntestset: URL, BLEU: 9.0, chr-F: 0.360\ntestset: URL, BLEU: 19.0, chr-F: 0.324\ntestset: URL, BLEU: 35.8, chr-F: 0.523\ntestset: URL, BLEU: 35.7, chr-F: 0.495\ntestset: URL, BLEU: 42.7, chr-F: 0.644\ntestset: URL, BLEU: 22.4, chr-F: 0.477\ntestset: URL, BLEU: 4.3, chr-F: 0.141\ntestset: URL, BLEU: 9.0, chr-F: 0.345\ntestset: URL, BLEU: 16.0, chr-F: 0.289\ntestset: URL, BLEU: 4.1, chr-F: 0.143\ntestset: URL, BLEU: 3.0, chr-F: 0.247\ntestset: URL, BLEU: 11.6, chr-F: 0.294\ntestset: URL, BLEU: 19.0, chr-F: 0.220\ntestset: URL, BLEU: 4.8, chr-F: 0.188\ntestset: URL, BLEU: 6.1, chr-F: 0.136\ntestset: URL, BLEU: 16.0, chr-F: 0.054\ntestset: URL, BLEU: 0.7, chr-F: 0.124\ntestset: URL, BLEU: 5.4, chr-F: 0.238\ntestset: URL, BLEU: 10.5, chr-F: 0.155\ntestset: URL, BLEU: 18.6, chr-F: 0.427\ntestset: URL, BLEU: 38.9, chr-F: 0.611\ntestset: URL, BLEU: 6.8, chr-F: 0.276\ntestset: URL, BLEU: 10.5, chr-F: 0.138\ntestset: URL, BLEU: 12.7, chr-F: 0.088\ntestset: URL, BLEU: 7.6, chr-F: 0.109\ntestset: URL, BLEU: 18.8, chr-F: 0.254\ntestset: URL, BLEU: 21.4, chr-F: 0.339\ntestset: URL, BLEU: 4.0, chr-F: 0.440\ntestset: URL, BLEU: 5.3, chr-F: 0.231\ntestset: URL, BLEU: 24.9, chr-F: 0.420\ntestset: URL, BLEU: 0.0, chr-F: 0.056\ntestset: URL, BLEU: 16.0, chr-F: 0.171\ntestset: URL, BLEU: 2.1, chr-F: 0.258\ntestset: URL, BLEU: 43.5, chr-F: 0.557\ntestset: URL, BLEU: 21.3, chr-F: 0.402\ntestset: URL, BLEU: 3.0, chr-F: 0.164\ntestset: URL, BLEU: 12.7, chr-F: 0.142\ntestset: URL, BLEU: 10.5, chr-F: 0.131\ntestset: 
URL, BLEU: 0.6, chr-F: 0.087\ntestset: URL, BLEU: 26.2, chr-F: 0.443\ntestset: URL, BLEU: 3.6, chr-F: 0.176\ntestset: URL, BLEU: 0.0, chr-F: 0.632\ntestset: URL, BLEU: 5.8, chr-F: 0.163\ntestset: URL, BLEU: 14.5, chr-F: 0.104\ntestset: URL, BLEU: 53.7, chr-F: 0.504\ntestset: URL, BLEU: 8.5, chr-F: 0.311\ntestset: URL, BLEU: 8.7, chr-F: 0.259\ntestset: URL, BLEU: 10.3, chr-F: 0.303\ntestset: URL, BLEU: 1.3, chr-F: 0.006\ntestset: URL, BLEU: 8.6, chr-F: 0.331\ntestset: URL, BLEU: 7.2, chr-F: 0.301\ntestset: URL, BLEU: 0.4, chr-F: 0.074\ntestset: URL, BLEU: 14.4, chr-F: 0.256\ntestset: URL, BLEU: 9.8, chr-F: 0.325\ntestset: URL, BLEU: 6.6, chr-F: 0.127\ntestset: URL, BLEU: 50.0, chr-F: 0.657\ntestset: URL, BLEU: 4.5, chr-F: 0.223\ntestset: URL, BLEU: 8.6, chr-F: 0.316\ntestset: URL, BLEU: 19.1, chr-F: 0.445\ntestset: URL, BLEU: 9.8, chr-F: 0.313\ntestset: URL, BLEU: 9.1, chr-F: 0.318\ntestset: URL, BLEU: 4.8, chr-F: 0.213\ntestset: URL, BLEU: 2.0, chr-F: 0.138\ntestset: URL, BLEU: 49.7, chr-F: 0.630\ntestset: URL, BLEU: 1.0, chr-F: 0.105\ntestset: URL, BLEU: 0.0, chr-F: 0.011\ntestset: URL, BLEU: 4.1, chr-F: 0.194\ntestset: URL, BLEU: 23.0, chr-F: 0.410\ntestset: URL, BLEU: 22.2, chr-F: 0.448\ntestset: URL, BLEU: 6.4, chr-F: 0.341\ntestset: URL, BLEU: 1.2, chr-F: 0.035\ntestset: URL, BLEU: 3.4, chr-F: 0.204\ntestset: URL, BLEU: 31.2, chr-F: 0.528\ntestset: URL, BLEU: 33.9, chr-F: 0.570\ntestset: URL, BLEU: 26.9, chr-F: 0.490\ntestset: URL, BLEU: 0.2, chr-F: 0.039\ntestset: URL, BLEU: 0.3, chr-F: 0.061\ntestset: URL, BLEU: 17.3, chr-F: 0.455\ntestset: URL, BLEU: 47.1, chr-F: 0.634\ntestset: URL, BLEU: 31.1, chr-F: 0.530\ntestset: URL, BLEU: 0.7, chr-F: 0.061\ntestset: URL, BLEU: 32.4, chr-F: 0.544\ntestset: URL, BLEU: 40.1, chr-F: 0.583\ntestset: URL, BLEU: 5.1, chr-F: 0.207\ntestset: URL, BLEU: 1.8, chr-F: 0.304\ntestset: URL, BLEU: 5.6, chr-F: 0.233\ntestset: URL, BLEU: 0.3, chr-F: 0.149\ntestset: URL, BLEU: 6.4, chr-F: 0.412\ntestset: URL, BLEU: 11.4, chr-F: 0.357\ntestset: URL, BLEU: 0.1, chr-F: 0.067\ntestset: URL, BLEU: 9.1, chr-F: 0.316\ntestset: URL, BLEU: 16.8, chr-F: 0.416\ntestset: URL, BLEU: 34.5, chr-F: 0.562\ntestset: URL, BLEU: 5.5, chr-F: 0.204\ntestset: URL, BLEU: 0.2, chr-F: 0.001\ntestset: URL, BLEU: 0.1, chr-F: 0.006\ntestset: URL, BLEU: 20.8, chr-F: 0.424\ntestset: URL, BLEU: 28.9, chr-F: 0.511\ntestset: URL, BLEU: 5.1, chr-F: 0.336\ntestset: URL, BLEU: 11.5, chr-F: 0.401\ntestset: URL, BLEU: 17.2, chr-F: 0.362\ntestset: URL, BLEU: 37.7, chr-F: 0.606\ntestset: URL, BLEU: 2.8, chr-F: 0.148\ntestset: URL, BLEU: 14.3, chr-F: 0.188\ntestset: URL, BLEU: 0.4, chr-F: 0.129\ntestset: URL, BLEU: 2.8, chr-F: 0.258\ntestset: URL, BLEU: 30.3, chr-F: 0.490\ntestset: URL, BLEU: 0.3, chr-F: 0.099\ntestset: URL, BLEU: 18.3, chr-F: 0.461\ntestset: URL, BLEU: 0.6, chr-F: 0.185\ntestset: URL, BLEU: 1.2, chr-F: 0.163\ntestset: URL, BLEU: 15.3, chr-F: 0.385\ntestset: URL, BLEU: 45.7, chr-F: 0.393\ntestset: URL, BLEU: 29.5, chr-F: 0.498\ntestset: URL, BLEU: 19.4, chr-F: 0.456\ntestset: URL, BLEU: 12.9, chr-F: 0.356\ntestset: URL, BLEU: 33.0, chr-F: 0.532\ntestset: URL, BLEU: 1.2, chr-F: 0.072\ntestset: URL, BLEU: 35.1, chr-F: 0.553\ntestset: URL, BLEU: 6.8, chr-F: 0.313\ntestset: URL, BLEU: 0.2, chr-F: 0.004\ntestset: URL, BLEU: 3.6, chr-F: 0.112\ntestset: URL, BLEU: 78.3, chr-F: 0.917\ntestset: URL, BLEU: 0.1, chr-F: 0.084\ntestset: URL, BLEU: 0.3, chr-F: 0.117\ntestset: URL, BLEU: 22.4, chr-F: 0.468\ntestset: URL, BLEU: 33.0, chr-F: 0.559\ntestset: URL, BLEU: 0.6, chr-F: 0.084\ntestset: URL, 
BLEU: 5.9, chr-F: 0.278\ntestset: URL, BLEU: 4.2, chr-F: 0.257\ntestset: URL, BLEU: 29.7, chr-F: 0.531\ntestset: URL, BLEU: 28.8, chr-F: 0.498\ntestset: URL, BLEU: 0.4, chr-F: 0.056\ntestset: URL, BLEU: 1.7, chr-F: 0.222\ntestset: URL, BLEU: 2.4, chr-F: 0.207\ntestset: URL, BLEU: 38.6, chr-F: 0.598\ntestset: URL, BLEU: 23.9, chr-F: 0.455\ntestset: URL, BLEU: 1.2, chr-F: 0.159\ntestset: URL, BLEU: 44.2, chr-F: 0.609\ntestset: URL, BLEU: 2.4, chr-F: 0.123\ntestset: URL, BLEU: 2.8, chr-F: 0.244\ntestset: URL, BLEU: 0.5, chr-F: 0.034\ntestset: URL, BLEU: 26.7, chr-F: 0.474\ntestset: URL, BLEU: 2.3, chr-F: 0.333\ntestset: URL, BLEU: 0.6, chr-F: 0.088\ntestset: URL, BLEU: 5.3, chr-F: 0.178\ntestset: URL, BLEU: 8.7, chr-F: 0.271\ntestset: URL, BLEU: 19.2, chr-F: 0.394\ntestset: URL, BLEU: 12.3, chr-F: 0.482\ntestset: URL, BLEU: 8.3, chr-F: 0.286\ntestset: URL, BLEU: 6.1, chr-F: 0.181\ntestset: URL, BLEU: 12.7, chr-F: 0.535\ntestset: URL, BLEU: 4.1, chr-F: 0.144\ntestset: URL, BLEU: 0.5, chr-F: 0.033\ntestset: URL, BLEU: 12.4, chr-F: 0.127\ntestset: URL, BLEU: 6.9, chr-F: 0.233\ntestset: URL, BLEU: 0.5, chr-F: 0.045\ntestset: URL, BLEU: 0.0, chr-F: 0.244\ntestset: URL, BLEU: 4.2, chr-F: 0.280\ntestset: URL, BLEU: 21.7, chr-F: 0.448\ntestset: URL, BLEU: 22.9, chr-F: 0.431\ntestset: URL, BLEU: 10.7, chr-F: 0.140\ntestset: URL, BLEU: 31.8, chr-F: 0.455\ntestset: URL, BLEU: 0.5, chr-F: 0.040\ntestset: URL, BLEU: 0.7, chr-F: 0.204\ntestset: URL, BLEU: 34.8, chr-F: 0.528\ntestset: URL, BLEU: 8.1, chr-F: 0.318\ntestset: URL, BLEU: 21.4, chr-F: 0.324\ntestset: URL, BLEU: 0.1, chr-F: 0.000\ntestset: URL, BLEU: 6.6, chr-F: 0.127\ntestset: URL, BLEU: 35.7, chr-F: 0.576\ntestset: URL, BLEU: 32.6, chr-F: 0.511\ntestset: URL, BLEU: 17.7, chr-F: 0.342\ntestset: URL, BLEU: 12.1, chr-F: 0.304\ntestset: URL, BLEU: 31.7, chr-F: 0.438\ntestset: URL, BLEU: 30.6, chr-F: 0.479\ntestset: URL, BLEU: 0.5, chr-F: 0.156\ntestset: URL, BLEU: 27.5, chr-F: 0.247\ntestset: URL, BLEU: 16.1, chr-F: 0.330\ntestset: URL, BLEU: 4.0, chr-F: 0.167\ntestset: URL, BLEU: 13.2, chr-F: 0.257\ntestset: URL, BLEU: 6.0, chr-F: 0.241\ntestset: URL, BLEU: 0.0, chr-F: 0.170\ntestset: URL, BLEU: 0.0, chr-F: 0.427\ntestset: URL, BLEU: 0.0, chr-F: 1.000\ntestset: URL, BLEU: 31.8, chr-F: 0.374\ntestset: URL, BLEU: 11.5, chr-F: 0.416\ntestset: URL, BLEU: 15.1, chr-F: 0.348\ntestset: URL, BLEU: 17.5, chr-F: 0.329\ntestset: URL, BLEU: 13.1, chr-F: 0.346\ntestset: URL, BLEU: 12.1, chr-F: 0.306\ntestset: URL, BLEU: 8.0, chr-F: 0.035\ntestset: URL, BLEU: 20.8, chr-F: 0.299\ntestset: URL, BLEU: 13.7, chr-F: 0.355\ntestset: URL, BLEU: 24.7, chr-F: 0.423\ntestset: URL, BLEU: 12.7, chr-F: 0.322\ntestset: URL, BLEU: 7.8, chr-F: 0.288\ntestset: URL, BLEU: 13.5, chr-F: 0.390\ntestset: URL, BLEU: 32.0, chr-F: 0.490\ntestset: URL, BLEU: 5.0, chr-F: 0.135\ntestset: URL, BLEU: 18.0, chr-F: 0.403\ntestset: URL, BLEU: 16.9, chr-F: 0.377\ntestset: URL, BLEU: 0.0, chr-F: 0.077\ntestset: URL, BLEU: 2.4, chr-F: 0.328\ntestset: URL, BLEU: 0.0, chr-F: 0.673\ntestset: URL, BLEU: 2.5, chr-F: 0.139\ntestset: URL, BLEU: 24.5, chr-F: 0.458\ntestset: URL, BLEU: 13.3, chr-F: 0.324\ntestset: URL, BLEU: 30.4, chr-F: 0.539\ntestset: URL, BLEU: 30.2, chr-F: 0.448\ntestset: URL, BLEU: 37.9, chr-F: 0.571\ntestset: URL, BLEU: 45.8, chr-F: 0.627\ntestset: URL, BLEU: 31.1, chr-F: 0.561\ntestset: URL, BLEU: 36.2, chr-F: 0.573\ntestset: URL, BLEU: 22.7, chr-F: 0.524\ntestset: URL, BLEU: 47.4, chr-F: 0.674\ntestset: URL, BLEU: 28.4, chr-F: 0.465\ntestset: URL, BLEU: 53.2, chr-F: 
0.704\ntestset: URL, BLEU: 1.4, chr-F: 0.140\ntestset: URL, BLEU: 3.2, chr-F: 0.104\ntestset: URL, BLEU: 9.9, chr-F: 0.243\ntestset: URL, BLEU: 6.2, chr-F: 0.269\ntestset: URL, BLEU: 0.0, chr-F: 0.056\ntestset: URL, BLEU: 6.6, chr-F: 0.107\ntestset: URL, BLEU: 12.0, chr-F: 0.356\ntestset: URL, BLEU: 15.7, chr-F: 0.384\ntestset: URL, BLEU: 14.8, chr-F: 0.320\ntestset: URL, BLEU: 4.1, chr-F: 0.292\ntestset: URL, BLEU: 19.0, chr-F: 0.111\ntestset: URL, BLEU: 8.4, chr-F: 0.321\ntestset: URL, BLEU: 0.9, chr-F: 0.064\ntestset: URL, BLEU: 13.5, chr-F: 0.361\ntestset: URL, BLEU: 8.2, chr-F: 0.228\ntestset: URL, BLEU: 31.9, chr-F: 0.610\ntestset: URL, BLEU: 0.0, chr-F: 0.050\ntestset: URL, BLEU: 0.5, chr-F: 0.010\ntestset: URL, BLEU: 4.5, chr-F: 0.206\ntestset: URL, BLEU: 4.2, chr-F: 0.220\ntestset: URL, BLEU: 3.9, chr-F: 0.202\ntestset: URL, BLEU: 16.8, chr-F: 0.389\ntestset: URL, BLEU: 5.2, chr-F: 0.298\ntestset: URL, BLEU: 24.7, chr-F: 0.406\ntestset: URL, BLEU: 0.4, chr-F: 0.137\ntestset: URL, BLEU: 16.8, chr-F: 0.310\ntestset: URL, BLEU: 5.4, chr-F: 0.370\ntestset: URL, BLEU: 4.3, chr-F: 0.170\ntestset: URL, BLEU: 0.6, chr-F: 0.044\ntestset: URL, BLEU: 0.1, chr-F: 0.050\ntestset: URL, BLEU: 0.2, chr-F: 0.064\ntestset: URL, BLEU: 3.1, chr-F: 0.013\ntestset: URL, BLEU: 0.2, chr-F: 0.050\ntestset: URL, BLEU: 2.7, chr-F: 0.155\ntestset: URL, BLEU: 4.7, chr-F: 0.198\ntestset: URL, BLEU: 1.9, chr-F: 0.146\ntestset: URL, BLEU: 12.8, chr-F: 0.234\ntestset: URL, BLEU: 0.5, chr-F: 0.114\ntestset: URL, BLEU: 0.8, chr-F: 0.163\ntestset: URL, BLEU: 2.4, chr-F: 0.141\ntestset: URL, BLEU: 12.6, chr-F: 0.393\ntestset: URL, BLEU: 15.9, chr-F: 0.322\ntestset: URL, BLEU: 19.0, chr-F: 0.308\ntestset: URL, BLEU: 15.9, chr-F: 0.301\ntestset: URL, BLEU: 14.7, chr-F: 0.250\ntestset: URL, BLEU: 38.5, chr-F: 0.522\ntestset: URL, BLEU: 17.6, chr-F: 0.424\ntestset: URL, BLEU: 32.0, chr-F: 0.472\ntestset: URL, BLEU: 31.2, chr-F: 0.496\ntestset: URL, BLEU: 40.1, chr-F: 0.579\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 27.8, chr-F: 0.543\ntestset: URL, BLEU: 32.9, chr-F: 0.545\ntestset: URL, BLEU: 38.6, chr-F: 0.563\ntestset: URL, BLEU: 2.3, chr-F: 0.299\ntestset: URL, BLEU: 33.3, chr-F: 0.548\ntestset: URL, BLEU: 37.9, chr-F: 0.602\ntestset: URL, BLEU: 9.8, chr-F: 0.289\ntestset: URL, BLEU: 38.0, chr-F: 0.718\ntestset: URL, BLEU: 31.8, chr-F: 0.528\ntestset: URL, BLEU: 31.7, chr-F: 0.548\ntestset: URL, BLEU: 28.1, chr-F: 0.484\ntestset: URL, BLEU: 38.9, chr-F: 0.596\ntestset: URL, BLEU: 38.6, chr-F: 0.589\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 36.0, chr-F: 0.557\ntestset: URL, BLEU: 8.1, chr-F: 0.441\ntestset: URL, BLEU: 8.9, chr-F: 0.439\ntestset: URL, BLEU: 8.8, chr-F: 0.288\ntestset: URL, BLEU: 26.1, chr-F: 0.414\ntestset: URL, BLEU: 25.5, chr-F: 0.440\ntestset: URL, BLEU: 30.1, chr-F: 0.449\ntestset: URL, BLEU: 12.6, chr-F: 0.412\ntestset: URL, BLEU: 9.9, chr-F: 0.416\ntestset: URL, BLEU: 8.4, chr-F: 0.289\ntestset: URL, BLEU: 21.2, chr-F: 0.395\ntestset: URL, BLEU: 25.9, chr-F: 0.384\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 10.4, chr-F: 0.376\ntestset: URL, BLEU: 18.1, chr-F: 0.373\ntestset: URL, BLEU: 24.4, chr-F: 0.467\ntestset: URL, BLEU: 42.9, chr-F: 0.583\ntestset: URL, BLEU: 19.5, chr-F: 0.444\ntestset: URL, BLEU: 11.6, chr-F: 0.323\ntestset: URL, BLEU: 22.1, chr-F: 0.398\ntestset: URL, BLEU: 32.1, chr-F: 0.386\ntestset: URL, BLEU: 21.9, chr-F: 0.407\ntestset: URL, BLEU: 29.3, chr-F: 0.476\ntestset: URL, BLEU: 40.5, chr-F: 0.708\ntestset: URL, BLEU: 
0.0, chr-F: 0.034\ntestset: URL, BLEU: 38.1, chr-F: 0.582\ntestset: URL, BLEU: 31.8, chr-F: 0.511\ntestset: URL, BLEU: 29.8, chr-F: 0.483\ntestset: URL, BLEU: 39.8, chr-F: 0.336\ntestset: URL, BLEU: 26.3, chr-F: 0.441\ntestset: URL, BLEU: 27.3, chr-F: 0.469\ntestset: URL, BLEU: 1.9, chr-F: 0.047\ntestset: URL, BLEU: 28.9, chr-F: 0.501\ntestset: URL, BLEU: 2.6, chr-F: 0.135\ntestset: URL, BLEU: 59.6, chr-F: 0.740\ntestset: URL, BLEU: 0.1, chr-F: 0.012\ntestset: URL, BLEU: 40.2, chr-F: 0.566\ntestset: URL, BLEU: 19.7, chr-F: 0.358\ntestset: URL, BLEU: 17.4, chr-F: 0.465\ntestset: URL, BLEU: 18.0, chr-F: 0.386\ntestset: URL, BLEU: 30.7, chr-F: 0.496\ntestset: URL, BLEU: 10.7, chr-F: 0.133\ntestset: URL, BLEU: 38.1, chr-F: 0.539\ntestset: URL, BLEU: 53.2, chr-F: 0.676\ntestset: URL, BLEU: 3.8, chr-F: 0.125\ntestset: URL, BLEU: 3.4, chr-F: 0.252\ntestset: URL, BLEU: 24.2, chr-F: 0.460\ntestset: URL, BLEU: 12.1, chr-F: 0.427\ntestset: URL, BLEU: 4.7, chr-F: 0.287\ntestset: URL, BLEU: 27.8, chr-F: 0.482\ntestset: URL, BLEU: 40.6, chr-F: 0.608\ntestset: URL, BLEU: 23.1, chr-F: 0.450\ntestset: URL, BLEU: 0.8, chr-F: 0.060\ntestset: URL, BLEU: 10.1, chr-F: 0.375\ntestset: URL, BLEU: 38.9, chr-F: 0.577\ntestset: URL, BLEU: 31.7, chr-F: 0.539\ntestset: URL, BLEU: 0.2, chr-F: 0.061\ntestset: URL, BLEU: 31.5, chr-F: 0.539\ntestset: URL, BLEU: 47.4, chr-F: 0.633\ntestset: URL, BLEU: 6.4, chr-F: 0.247\ntestset: URL, BLEU: 4.2, chr-F: 0.236\ntestset: URL, BLEU: 46.6, chr-F: 0.642\ntestset: URL, BLEU: 20.0, chr-F: 0.409\ntestset: URL, BLEU: 7.8, chr-F: 0.312\ntestset: URL, BLEU: 36.3, chr-F: 0.577\ntestset: URL, BLEU: 1.1, chr-F: 0.030\ntestset: URL, BLEU: 39.4, chr-F: 0.595\ntestset: URL, BLEU: 18.5, chr-F: 0.408\ntestset: URL, BLEU: 1.9, chr-F: 0.160\ntestset: URL, BLEU: 1.0, chr-F: 0.178\ntestset: URL, BLEU: 7.1, chr-F: 0.320\ntestset: URL, BLEU: 29.0, chr-F: 0.511\ntestset: URL, BLEU: 0.2, chr-F: 0.107\ntestset: URL, BLEU: 20.7, chr-F: 0.475\ntestset: URL, BLEU: 20.6, chr-F: 0.373\ntestset: URL, BLEU: 14.3, chr-F: 0.409\ntestset: URL, BLEU: 13.3, chr-F: 0.378\ntestset: URL, BLEU: 37.8, chr-F: 0.578\ntestset: URL, BLEU: 35.7, chr-F: 0.578\ntestset: URL, BLEU: 11.0, chr-F: 0.369\ntestset: URL, BLEU: 1.2, chr-F: 0.010\ntestset: URL, BLEU: 0.2, chr-F: 0.110\ntestset: URL, BLEU: 25.9, chr-F: 0.507\ntestset: URL, BLEU: 36.8, chr-F: 0.597\ntestset: URL, BLEU: 34.3, chr-F: 0.574\ntestset: URL, BLEU: 28.5, chr-F: 0.494\ntestset: URL, BLEU: 11.7, chr-F: 0.364\ntestset: URL, BLEU: 46.3, chr-F: 0.653\ntestset: URL, BLEU: 21.9, chr-F: 0.418\ntestset: URL, BLEU: 37.7, chr-F: 0.562\ntestset: URL, BLEU: 33.1, chr-F: 0.538\ntestset: URL, BLEU: 0.8, chr-F: 0.095\ntestset: URL, BLEU: 10.3, chr-F: 0.280\ntestset: URL, BLEU: 3.9, chr-F: 0.098\ntestset: URL, BLEU: 5.0, chr-F: 0.217\ntestset: URL, BLEU: 12.2, chr-F: 0.357\ntestset: URL, BLEU: 4.1, chr-F: 0.237\ntestset: URL, BLEU: 5.3, chr-F: 0.299\ntestset: URL, BLEU: 15.3, chr-F: 0.322\ntestset: URL, BLEU: 0.0, chr-F: 0.095\ntestset: URL, BLEU: 11.3, chr-F: 0.272\ntestset: URL, BLEU: 0.0, chr-F: 0.069\ntestset: URL, BLEU: 35.4, chr-F: 0.540\ntestset: URL, BLEU: 24.3, chr-F: 0.509\ntestset: URL, BLEU: 12.0, chr-F: 0.226\ntestset: URL, BLEU: 10.0, chr-F: 0.205\ntestset: URL, BLEU: 5.5, chr-F: 0.048\ntestset: URL, BLEU: 16.5, chr-F: 0.236\ntestset: URL, BLEU: 7.6, chr-F: 0.081\ntestset: URL, BLEU: 1.6, chr-F: 0.013\ntestset: URL, BLEU: 11.4, chr-F: 0.362\ntestset: URL, BLEU: 0.2, chr-F: 0.067\ntestset: URL, BLEU: 6.1, chr-F: 0.240\ntestset: URL, BLEU: 1.9, chr-F: 
0.161\ntestset: URL, BLEU: 3.3, chr-F: 0.155\ntestset: URL, BLEU: 31.9, chr-F: 0.184\ntestset: URL, BLEU: 5.0, chr-F: 0.230\ntestset: URL, BLEU: 37.0, chr-F: 0.295\ntestset: URL, BLEU: 1.3, chr-F: 0.184\ntestset: URL, BLEU: 39.1, chr-F: 0.426\ntestset: URL, BLEU: 4.3, chr-F: 0.206\ntestset: URL, BLEU: 2.1, chr-F: 0.164\ntestset: URL, BLEU: 1.4, chr-F: 0.046\ntestset: URL, BLEU: 9.7, chr-F: 0.330\ntestset: URL, BLEU: 35.4, chr-F: 0.529\ntestset: URL, BLEU: 33.1, chr-F: 0.604\ntestset: URL, BLEU: 15.4, chr-F: 0.325\ntestset: URL, BLEU: 19.3, chr-F: 0.405\ntestset: URL, BLEU: 23.1, chr-F: 0.421\ntestset: URL, BLEU: 2.2, chr-F: 0.173\ntestset: URL, BLEU: 5.2, chr-F: 0.194\ntestset: URL, BLEU: 26.3, chr-F: 0.405\ntestset: URL, BLEU: 0.0, chr-F: 0.170\ntestset: URL, BLEU: 21.4, chr-F: 0.347\ntestset: URL, BLEU: 1.2, chr-F: 0.058\ntestset: URL, BLEU: 22.7, chr-F: 0.479\ntestset: URL, BLEU: 2.4, chr-F: 0.190\ntestset: URL, BLEU: 3.4, chr-F: 0.239\ntestset: URL, BLEU: 45.5, chr-F: 0.580\ntestset: URL, BLEU: 23.0, chr-F: 0.690\ntestset: URL, BLEU: 33.5, chr-F: 0.449\ntestset: URL, BLEU: 66.9, chr-F: 0.951\ntestset: URL, BLEU: 0.0, chr-F: 0.076\ntestset: URL, BLEU: 27.5, chr-F: 0.448\ntestset: URL, BLEU: 78.3, chr-F: 0.693\ntestset: URL, BLEU: 6.5, chr-F: 0.308\ntestset: URL, BLEU: 0.0, chr-F: 0.179\ntestset: URL, BLEU: 59.5, chr-F: 0.602\ntestset: URL, BLEU: 37.0, chr-F: 0.553\ntestset: URL, BLEU: 66.9, chr-F: 0.783\ntestset: URL, BLEU: 8.1, chr-F: 0.282\ntestset: URL, BLEU: 4.8, chr-F: 0.212\ntestset: URL, BLEU: 5.0, chr-F: 0.237\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 0.9, chr-F: 0.068\ntestset: URL, BLEU: 10.6, chr-F: 0.284\ntestset: URL, BLEU: 27.5, chr-F: 0.481\ntestset: URL, BLEU: 15.6, chr-F: 0.331\ntestset: URL, BLEU: 2.9, chr-F: 0.203\ntestset: URL, BLEU: 29.4, chr-F: 0.479\ntestset: URL, BLEU: 19.9, chr-F: 0.391\ntestset: URL, BLEU: 20.5, chr-F: 0.396\ntestset: URL, BLEU: 1.0, chr-F: 0.082\ntestset: URL, BLEU: 7.9, chr-F: 0.407\ntestset: URL, BLEU: 9.3, chr-F: 0.286\ntestset: URL, BLEU: 7.1, chr-F: 0.192\ntestset: URL, BLEU: 3.6, chr-F: 0.150\ntestset: URL, BLEU: 0.2, chr-F: 0.001\ntestset: URL, BLEU: 15.1, chr-F: 0.322\ntestset: URL, BLEU: 8.3, chr-F: 0.108\ntestset: URL, BLEU: 20.7, chr-F: 0.415\ntestset: URL, BLEU: 7.9, chr-F: 0.260\ntestset: URL, BLEU: 0.2, chr-F: 0.087\ntestset: URL, BLEU: 5.6, chr-F: 0.301\ntestset: URL, BLEU: 10.2, chr-F: 0.352\ntestset: URL, BLEU: 24.3, chr-F: 0.444\ntestset: URL, BLEU: 14.5, chr-F: 0.338\ntestset: URL, BLEU: 0.1, chr-F: 0.006\ntestset: URL, BLEU: 21.8, chr-F: 0.412\ntestset: URL, BLEU: 12.2, chr-F: 0.336\ntestset: URL, BLEU: 12.7, chr-F: 0.343\ntestset: URL, BLEU: 16.6, chr-F: 0.362\ntestset: URL, BLEU: 3.2, chr-F: 0.215\ntestset: URL, BLEU: 18.9, chr-F: 0.414\ntestset: URL, BLEU: 53.4, chr-F: 0.708\ntestset: URL, BLEU: 14.0, chr-F: 0.343\ntestset: URL, BLEU: 2.1, chr-F: 0.182\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 34.5, chr-F: 0.540\ntestset: URL, BLEU: 33.6, chr-F: 0.520\ntestset: URL, BLEU: 40.5, chr-F: 0.598\ntestset: URL, BLEU: 72.7, chr-F: 0.770\ntestset: URL, BLEU: 30.5, chr-F: 0.570\ntestset: URL, BLEU: 5.7, chr-F: 0.362\ntestset: URL, BLEU: 23.5, chr-F: 0.504\ntestset: URL, BLEU: 13.7, chr-F: 0.550\ntestset: URL, BLEU: 37.6, chr-F: 0.551\ntestset: URL, BLEU: 32.5, chr-F: 0.517\ntestset: URL, BLEU: 8.6, chr-F: 0.483\ntestset: URL, BLEU: 26.6, chr-F: 0.511\ntestset: URL, BLEU: 95.1, chr-F: 0.958\ntestset: URL, BLEU: 9.0, chr-F: 0.488\ntestset: URL, BLEU: 6.8, chr-F: 0.251\ntestset: URL, BLEU: 
12.2, chr-F: 0.329\ntestset: URL, BLEU: 10.4, chr-F: 0.366\ntestset: URL, BLEU: 25.7, chr-F: 0.472\ntestset: URL, BLEU: 37.5, chr-F: 0.551\ntestset: URL, BLEU: 32.1, chr-F: 0.489\ntestset: URL, BLEU: 22.3, chr-F: 0.460\ntestset: URL, BLEU: 7.4, chr-F: 0.195\ntestset: URL, BLEU: 22.6, chr-F: 0.378\ntestset: URL, BLEU: 9.7, chr-F: 0.282\ntestset: URL, BLEU: 7.2, chr-F: 0.374\ntestset: URL, BLEU: 30.9, chr-F: 0.529\ntestset: URL, BLEU: 25.0, chr-F: 0.439\ntestset: URL, BLEU: 30.6, chr-F: 0.504\ntestset: URL, BLEU: 8.6, chr-F: 0.331\ntestset: URL, BLEU: 32.9, chr-F: 0.516\ntestset: URL, BLEU: 19.6, chr-F: 0.371\ntestset: URL, BLEU: 6.5, chr-F: 0.360\ntestset: URL, BLEU: 13.7, chr-F: 0.310\ntestset: URL, BLEU: 13.1, chr-F: 0.368\ntestset: URL, BLEU: 3.4, chr-F: 0.064\ntestset: URL, BLEU: 9.3, chr-F: 0.351\ntestset: URL, BLEU: 22.3, chr-F: 0.323\ntestset: URL, BLEU: 10.9, chr-F: 0.333\ntestset: URL, BLEU: 49.5, chr-F: 0.589\ntestset: URL, BLEU: 0.0, chr-F: 0.051\ntestset: URL, BLEU: 9.7, chr-F: 0.353\ntestset: URL, BLEU: 65.1, chr-F: 0.463\ntestset: URL, BLEU: 35.6, chr-F: 0.533\ntestset: URL, BLEU: 33.7, chr-F: 0.448\ntestset: URL, BLEU: 24.3, chr-F: 0.451\ntestset: URL, BLEU: 23.4, chr-F: 0.621\ntestset: URL, BLEU: 0.5, chr-F: 0.104\ntestset: URL, BLEU: 14.2, chr-F: 0.412\ntestset: URL, BLEU: 7.8, chr-F: 0.179\ntestset: URL, BLEU: 7.6, chr-F: 0.106\ntestset: URL, BLEU: 32.4, chr-F: 0.488\ntestset: URL, BLEU: 27.8, chr-F: 0.599\ntestset: URL, BLEU: 12.7, chr-F: 0.319\ntestset: URL, BLEU: 18.0, chr-F: 0.392\ntestset: URL, BLEU: 15.6, chr-F: 0.458\ntestset: URL, BLEU: 0.6, chr-F: 0.065\ntestset: URL, BLEU: 32.5, chr-F: 0.403\ntestset: URL, BLEU: 1.4, chr-F: 0.236\ntestset: URL, BLEU: 49.8, chr-F: 0.429\ntestset: URL, BLEU: 18.6, chr-F: 0.460\ntestset: URL, BLEU: 5.1, chr-F: 0.230\ntestset: URL, BLEU: 14.2, chr-F: 0.379\ntestset: URL, BLEU: 20.0, chr-F: 0.422\ntestset: URL, BLEU: 40.7, chr-F: 0.470\ntestset: URL, BLEU: 7.3, chr-F: 0.407\ntestset: URL, BLEU: 35.4, chr-F: 0.638\ntestset: URL, BLEU: 49.0, chr-F: 0.615\ntestset: URL, BLEU: 42.7, chr-F: 0.655\ntestset: URL, BLEU: 9.7, chr-F: 0.362\ntestset: URL, BLEU: 61.6, chr-F: 0.819\ntestset: URL, BLEU: 15.0, chr-F: 0.506\ntestset: URL, BLEU: 31.0, chr-F: 0.548\ntestset: URL, BLEU: 35.8, chr-F: 0.524\ntestset: URL, BLEU: 30.2, chr-F: 0.486\ntestset: URL, BLEU: 32.5, chr-F: 0.589\ntestset: URL, BLEU: 16.6, chr-F: 0.557\ntestset: URL, BLEU: 11.6, chr-F: 0.395\ntestset: URL, BLEU: 42.7, chr-F: 0.680\ntestset: URL, BLEU: 53.7, chr-F: 0.833\ntestset: URL, BLEU: 10.1, chr-F: 0.492\ntestset: URL, BLEU: 9.7, chr-F: 0.196\ntestset: URL, BLEU: 24.7, chr-F: 0.727\ntestset: URL, BLEU: 43.2, chr-F: 0.601\ntestset: URL, BLEU: 23.6, chr-F: 0.361\ntestset: URL, BLEU: 42.7, chr-F: 0.864\ntestset: URL, BLEU: 3.4, chr-F: 0.323\ntestset: URL, BLEU: 17.1, chr-F: 0.418\ntestset: URL, BLEU: 1.8, chr-F: 0.199\ntestset: URL, BLEU: 11.9, chr-F: 0.258\ntestset: URL, BLEU: 3.4, chr-F: 0.115\ntestset: URL, BLEU: 0.0, chr-F: 0.000\ntestset: URL, BLEU: 23.5, chr-F: 0.470\ntestset: URL, BLEU: 19.7, chr-F: 0.490\ntestset: URL, BLEU: 27.8, chr-F: 0.472\ntestset: URL, BLEU: 2.0, chr-F: 0.232\ntestset: URL, BLEU: 5.9, chr-F: 0.241\ntestset: URL, BLEU: 25.9, chr-F: 0.465\ntestset: URL, BLEU: 1.7, chr-F: 0.195\ntestset: URL, BLEU: 3.4, chr-F: 0.228\ntestset: URL, BLEU: 23.4, chr-F: 0.481\ntestset: URL, BLEU: 11.5, chr-F: 0.304\ntestset: URL, BLEU: 5.8, chr-F: 0.243\ntestset: URL, BLEU: 20.9, chr-F: 0.442\ntestset: URL, BLEU: 14.8, chr-F: 0.431\ntestset: URL, BLEU: 83.8, chr-F: 
0.946\ntestset: URL, BLEU: 9.1, chr-F: 0.349\ntestset: URL, BLEU: 15.4, chr-F: 0.385\ntestset: URL, BLEU: 3.4, chr-F: 0.195\ntestset: URL, BLEU: 18.8, chr-F: 0.401\ntestset: URL, BLEU: 0.0, chr-F: 0.056\ntestset: URL, BLEU: 22.6, chr-F: 0.451\ntestset: URL, BLEU: 5.7, chr-F: 0.267\ntestset: URL, BLEU: 8.0, chr-F: 0.102\ntestset: URL, BLEU: 30.8, chr-F: 0.509\ntestset: URL, BLEU: 22.8, chr-F: 0.416\ntestset: URL, BLEU: 7.0, chr-F: 0.321\ntestset: URL, BLEU: 35.4, chr-F: 0.561\ntestset: URL, BLEU: 42.7, chr-F: 0.835\ntestset: URL, BLEU: 38.3, chr-F: 0.491\ntestset: URL, BLEU: 18.5, chr-F: 0.399\ntestset: URL, BLEU: 32.6, chr-F: 0.552\ntestset: URL, BLEU: 18.1, chr-F: 0.426\ntestset: URL, BLEU: 28.9, chr-F: 0.480\ntestset: URL, BLEU: 6.9, chr-F: 0.198\ntestset: URL, BLEU: 6.6, chr-F: 0.187\ntestset: URL, BLEU: 31.9, chr-F: 0.498\ntestset: URL, BLEU: 0.5, chr-F: 0.000\ntestset: URL, BLEU: 0.0, chr-F: 0.023\ntestset: URL, BLEU: 1.2, chr-F: 0.148\ntestset: URL, BLEU: 28.5, chr-F: 0.505\ntestset: URL, BLEU: 7.8, chr-F: 0.164\ntestset: URL, BLEU: 38.2, chr-F: 0.584\ntestset: URL, BLEU: 42.8, chr-F: 0.612\ntestset: URL, BLEU: 15.3, chr-F: 0.405\ntestset: URL, BLEU: 26.0, chr-F: 0.447\ntestset: URL, BLEU: 0.0, chr-F: 0.353\ntestset: URL, BLEU: 24.3, chr-F: 0.440\ntestset: URL, BLEU: 31.7, chr-F: 0.527\ntestset: URL, BLEU: 0.1, chr-F: 0.080\ntestset: URL, BLEU: 20.1, chr-F: 0.464\ntestset: URL, BLEU: 42.8, chr-F: 0.365\ntestset: URL, BLEU: 2.1, chr-F: 0.161\ntestset: URL, BLEU: 50.1, chr-F: 0.670\ntestset: URL, BLEU: 42.7, chr-F: 0.835\ntestset: URL, BLEU: 17.5, chr-F: 0.410\ntestset: URL, BLEU: 3.2, chr-F: 0.189\ntestset: URL, BLEU: 28.7, chr-F: 0.468\ntestset: URL, BLEU: 31.9, chr-F: 0.546\ntestset: URL, BLEU: 24.4, chr-F: 0.504\ntestset: URL, BLEU: 0.6, chr-F: 0.048\ntestset: URL, BLEU: 49.1, chr-F: 0.660\ntestset: URL, BLEU: 38.3, chr-F: 0.589\ntestset: URL, BLEU: 0.2, chr-F: 0.084\ntestset: URL, BLEU: 35.3, chr-F: 0.528\ntestset: URL, BLEU: 42.4, chr-F: 0.602\ntestset: URL, BLEU: 6.1, chr-F: 0.269\ntestset: URL, BLEU: 18.6, chr-F: 0.459\ntestset: URL, BLEU: 35.7, chr-F: 0.549\ntestset: URL, BLEU: 2.8, chr-F: 0.099\ntestset: URL, BLEU: 19.2, chr-F: 0.438\ntestset: URL, BLEU: 35.0, chr-F: 0.576\ntestset: URL, BLEU: 0.5, chr-F: 0.129\ntestset: URL, BLEU: 26.8, chr-F: 0.418\ntestset: URL, BLEU: 35.3, chr-F: 0.580\ntestset: URL, BLEU: 4.2, chr-F: 0.147\ntestset: URL, BLEU: 0.7, chr-F: 0.101\ntestset: URL, BLEU: 6.7, chr-F: 0.314\ntestset: URL, BLEU: 17.6, chr-F: 0.384\ntestset: URL, BLEU: 0.0, chr-F: 0.238\ntestset: URL, BLEU: 3.6, chr-F: 0.210\ntestset: URL, BLEU: 15.9, chr-F: 0.405\ntestset: URL, BLEU: 42.4, chr-F: 0.618\ntestset: URL, BLEU: 9.0, chr-F: 0.306\ntestset: URL, BLEU: 38.9, chr-F: 0.531\ntestset: URL, BLEU: 25.8, chr-F: 0.498\ntestset: URL, BLEU: 31.7, chr-F: 0.535\ntestset: URL, BLEU: 26.6, chr-F: 0.495\ntestset: URL, BLEU: 30.0, chr-F: 0.512\ntestset: URL, BLEU: 4.3, chr-F: 0.299\ntestset: URL, BLEU: 35.0, chr-F: 0.560\ntestset: URL, BLEU: 1.6, chr-F: 0.201\ntestset: URL, BLEU: 72.2, chr-F: 0.801\ntestset: URL, BLEU: 5.0, chr-F: 0.129\ntestset: URL, BLEU: 26.2, chr-F: 0.481\ntestset: URL, BLEU: 3.5, chr-F: 0.133\ntestset: URL, BLEU: 11.5, chr-F: 0.293\ntestset: URL, BLEU: 30.3, chr-F: 0.471\ntestset: URL, BLEU: 90.1, chr-F: 0.839\ntestset: URL, BLEU: 50.0, chr-F: 0.638\ntestset: URL, BLEU: 42.2, chr-F: 0.467\ntestset: URL, BLEU: 3.2, chr-F: 0.188\ntestset: URL, BLEU: 35.4, chr-F: 0.529\ntestset: URL, BLEU: 38.0, chr-F: 0.627\ntestset: URL, BLEU: 3.2, chr-F: 0.072\ntestset: URL, BLEU: 
14.7, chr-F: 0.465\ntestset: URL, BLEU: 59.0, chr-F: 0.757\ntestset: URL, BLEU: 32.4, chr-F: 0.560\ntestset: URL, BLEU: 29.9, chr-F: 0.507\ntestset: URL, BLEU: 40.8, chr-F: 0.585\ntestset: URL, BLEU: 4.2, chr-F: 0.303\ntestset: URL, BLEU: 10.0, chr-F: 0.345\ntestset: URL, BLEU: 38.4, chr-F: 0.572\ntestset: URL, BLEU: 18.7, chr-F: 0.375\ntestset: URL, BLEU: 10.7, chr-F: 0.015\ntestset: URL, BLEU: 21.7, chr-F: 0.465\ntestset: URL, BLEU: 14.8, chr-F: 0.307\ntestset: URL, BLEU: 23.2, chr-F: 0.445\ntestset: URL, BLEU: 35.2, chr-F: 0.594\ntestset: URL, BLEU: 10.7, chr-F: 0.037\ntestset: URL, BLEU: 6.6, chr-F: 0.370\ntestset: URL, BLEU: 3.6, chr-F: 0.261\ntestset: URL, BLEU: 12.2, chr-F: 0.404\ntestset: URL, BLEU: 8.0, chr-F: 0.442\ntestset: URL, BLEU: 20.3, chr-F: 0.466\ntestset: URL, BLEU: 39.1, chr-F: 0.598\ntestset: URL, BLEU: 49.0, chr-F: 0.698\ntestset: URL, BLEU: 26.3, chr-F: 0.515\ntestset: URL, BLEU: 31.0, chr-F: 0.543\ntestset: URL, BLEU: 28.0, chr-F: 0.475\ntestset: URL, BLEU: 28.1, chr-F: 0.513\ntestset: URL, BLEU: 1.2, chr-F: 0.193\ntestset: URL, BLEU: 38.2, chr-F: 0.598\ntestset: URL, BLEU: 58.8, chr-F: 0.741\ntestset: URL, BLEU: 29.1, chr-F: 0.515\ntestset: URL, BLEU: 42.6, chr-F: 0.473\ntestset: URL, BLEU: 11.2, chr-F: 0.346\ntestset: URL, BLEU: 13.4, chr-F: 0.331\ntestset: URL, BLEU: 5.3, chr-F: 0.206\ntestset: URL, BLEU: 19.6, chr-F: 0.423\ntestset: URL, BLEU: 24.5, chr-F: 0.493\ntestset: URL, BLEU: 22.5, chr-F: 0.408\ntestset: URL, BLEU: 8.8, chr-F: 0.322\ntestset: URL, BLEU: 16.4, chr-F: 0.387\ntestset: URL, BLEU: 20.4, chr-F: 0.442\ntestset: URL, BLEU: 66.9, chr-F: 0.968\ntestset: URL, BLEU: 3.9, chr-F: 0.168\ntestset: URL, BLEU: 9.1, chr-F: 0.175\ntestset: URL, BLEU: 5.8, chr-F: 0.256\ntestset: URL, BLEU: 8.4, chr-F: 0.243\ntestset: URL, BLEU: 8.9, chr-F: 0.244\ntestset: URL, BLEU: 8.1, chr-F: 0.297\ntestset: URL, BLEU: 1.2, chr-F: 0.207\ntestset: URL, BLEU: 11.6, chr-F: 0.338\ntestset: URL, BLEU: 8.2, chr-F: 0.234\ntestset: URL, BLEU: 7.8, chr-F: 0.331\ntestset: URL, BLEU: 6.4, chr-F: 0.217\ntestset: URL, BLEU: 5.8, chr-F: 0.230\ntestset: URL, BLEU: 10.8, chr-F: 0.279\ntestset: URL, BLEU: 6.0, chr-F: 0.225\ntestset: URL, BLEU: 6.1, chr-F: 0.256\ntestset: URL, BLEU: 0.0, chr-F: 0.626\ntestset: URL, BLEU: 45.7, chr-F: 0.586\ntestset: URL, BLEU: 43.9, chr-F: 0.589\ntestset: URL, BLEU: 0.0, chr-F: 0.347\ntestset: URL, BLEU: 41.9, chr-F: 0.587\ntestset: URL, BLEU: 14.4, chr-F: 0.365\ntestset: URL, BLEU: 5.8, chr-F: 0.274\ntestset: URL, BLEU: 33.0, chr-F: 0.474\ntestset: URL, BLEU: 36.1, chr-F: 0.479\ntestset: URL, BLEU: 0.7, chr-F: 0.026\ntestset: URL, BLEU: 13.1, chr-F: 0.310\ntestset: URL, BLEU: 8.8, chr-F: 0.296\ntestset: URL, BLEU: 13.0, chr-F: 0.309\ntestset: URL, BLEU: 10.0, chr-F: 0.327\ntestset: URL, BLEU: 15.2, chr-F: 0.304\ntestset: URL, BLEU: 10.4, chr-F: 0.352\ntestset: URL, BLEU: 40.2, chr-F: 0.589\ntestset: URL, BLEU: 24.8, chr-F: 0.503\ntestset: URL, BLEU: 29.4, chr-F: 0.508\ntestset: URL, BLEU: 20.3, chr-F: 0.416\ntestset: URL, BLEU: 28.0, chr-F: 0.489\ntestset: URL, BLEU: 1.3, chr-F: 0.052\ntestset: URL, BLEU: 7.0, chr-F: 0.347\ntestset: URL, BLEU: 37.0, chr-F: 0.551\ntestset: URL, BLEU: 29.1, chr-F: 0.508\ntestset: URL, BLEU: 0.8, chr-F: 0.070\ntestset: URL, BLEU: 32.3, chr-F: 0.519\ntestset: URL, BLEU: 34.1, chr-F: 0.531\ntestset: URL, BLEU: 1.2, chr-F: 0.234\ntestset: URL, BLEU: 6.5, chr-F: 0.208\ntestset: URL, BLEU: 30.8, chr-F: 0.510\ntestset: URL, BLEU: 7.2, chr-F: 0.287\ntestset: URL, BLEU: 14.6, chr-F: 0.301\ntestset: URL, BLEU: 18.4, chr-F: 
0.498\ntestset: URL, BLEU: 31.8, chr-F: 0.546\ntestset: URL, BLEU: 3.5, chr-F: 0.193\ntestset: URL, BLEU: 11.4, chr-F: 0.336\ntestset: URL, BLEU: 28.5, chr-F: 0.522\ntestset: URL, BLEU: 2.6, chr-F: 0.134\ntestset: URL, BLEU: 16.0, chr-F: 0.265\ntestset: URL, BLEU: 7.2, chr-F: 0.311\ntestset: URL, BLEU: 22.9, chr-F: 0.450\ntestset: URL, BLEU: 21.2, chr-F: 0.493\ntestset: URL, BLEU: 38.0, chr-F: 0.718\ntestset: URL, BLEU: 2.2, chr-F: 0.173\ntestset: URL, BLEU: 14.4, chr-F: 0.370\ntestset: URL, BLEU: 30.6, chr-F: 0.501\ntestset: URL, BLEU: 33.3, chr-F: 0.536\ntestset: URL, BLEU: 4.0, chr-F: 0.282\ntestset: URL, BLEU: 0.4, chr-F: 0.005\ntestset: URL, BLEU: 1.3, chr-F: 0.032\ntestset: URL, BLEU: 25.9, chr-F: 0.491\ntestset: URL, BLEU: 0.0, chr-F: 0.083\ntestset: URL, BLEU: 26.5, chr-F: 0.487\ntestset: URL, BLEU: 34.7, chr-F: 0.550\ntestset: URL, BLEU: 7.4, chr-F: 0.256\ntestset: URL, BLEU: 30.7, chr-F: 0.516\ntestset: URL, BLEU: 35.0, chr-F: 0.530\ntestset: URL, BLEU: 32.8, chr-F: 0.538\ntestset: URL, BLEU: 5.6, chr-F: 0.381\ntestset: URL, BLEU: 4.8, chr-F: 0.146\ntestset: URL, BLEU: 48.1, chr-F: 0.653\ntestset: URL, BLEU: 8.4, chr-F: 0.213\ntestset: URL, BLEU: 42.7, chr-F: 0.835\ntestset: URL, BLEU: 9.7, chr-F: 0.539\ntestset: URL, BLEU: 41.5, chr-F: 0.569\ntestset: URL, BLEU: 36.9, chr-F: 0.612\ntestset: URL, BLEU: 29.0, chr-F: 0.526\ntestset: URL, BLEU: 0.8, chr-F: 0.049\ntestset: URL, BLEU: 51.4, chr-F: 0.668\ntestset: URL, BLEU: 30.8, chr-F: 0.532\ntestset: URL, BLEU: 33.8, chr-F: 0.556\ntestset: URL, BLEU: 44.5, chr-F: 0.622\ntestset: URL, BLEU: 10.7, chr-F: 0.190\ntestset: URL, BLEU: 4.5, chr-F: 0.273\ntestset: URL, BLEU: 43.0, chr-F: 0.625\ntestset: URL, BLEU: 8.9, chr-F: 0.365\ntestset: URL, BLEU: 16.0, chr-F: 0.079\ntestset: URL, BLEU: 12.1, chr-F: 0.315\ntestset: URL, BLEU: 49.2, chr-F: 0.700\ntestset: URL, BLEU: 0.1, chr-F: 0.004\ntestset: URL, BLEU: 39.2, chr-F: 0.575\ntestset: URL, BLEU: 15.5, chr-F: 0.387\ntestset: URL, BLEU: 39.9, chr-F: 0.637\ntestset: URL, BLEU: 3.0, chr-F: 0.133\ntestset: URL, BLEU: 0.6, chr-F: 0.172\ntestset: URL, BLEU: 5.4, chr-F: 0.325\ntestset: URL, BLEU: 18.8, chr-F: 0.418\ntestset: URL, BLEU: 16.8, chr-F: 0.569\ntestset: URL, BLEU: 27.3, chr-F: 0.571\ntestset: URL, BLEU: 7.6, chr-F: 0.327\ntestset: URL, BLEU: 30.5, chr-F: 0.559\ntestset: URL, BLEU: 14.2, chr-F: 0.370\ntestset: URL, BLEU: 35.6, chr-F: 0.558\ntestset: URL, BLEU: 38.0, chr-F: 0.587\ntestset: URL, BLEU: 25.5, chr-F: 0.510\ntestset: URL, BLEU: 5.5, chr-F: 0.058\ntestset: URL, BLEU: 32.0, chr-F: 0.557\ntestset: URL, BLEU: 26.8, chr-F: 0.493\ntestset: URL, BLEU: 48.7, chr-F: 0.686\ntestset: URL, BLEU: 43.4, chr-F: 0.612\ntestset: URL, BLEU: 27.5, chr-F: 0.500\ntestset: URL, BLEU: 9.3, chr-F: 0.293\ntestset: URL, BLEU: 2.2, chr-F: 0.183\ntestset: URL, BLEU: 1.3, chr-F: 0.179\ntestset: URL, BLEU: 2.3, chr-F: 0.183\ntestset: URL, BLEU: 0.5, chr-F: 0.173\ntestset: URL, BLEU: 3.4, chr-F: 0.200\ntestset: URL, BLEU: 1.6, chr-F: 0.166\ntestset: URL, BLEU: 8.3, chr-F: 0.311\ntestset: URL, BLEU: 9.5, chr-F: 0.361\ntestset: URL, BLEU: 8.8, chr-F: 0.415\ntestset: URL, BLEU: 21.4, chr-F: 0.347\ntestset: URL, BLEU: 13.3, chr-F: 0.434\ntestset: URL, BLEU: 2.9, chr-F: 0.204\ntestset: URL, BLEU: 5.3, chr-F: 0.243\ntestset: URL, BLEU: 6.5, chr-F: 0.194\ntestset: URL, BLEU: 30.2, chr-F: 0.667\ntestset: URL, BLEU: 35.4, chr-F: 0.493\ntestset: URL, BLEU: 23.6, chr-F: 0.542\ntestset: URL, BLEU: 10.6, chr-F: 0.344\ntestset: URL, BLEU: 12.7, chr-F: 0.652\ntestset: URL, BLEU: 32.1, chr-F: 0.524\ntestset: URL, BLEU: 
38.4, chr-F: 0.566\ntestset: URL, BLEU: 5.3, chr-F: 0.351\ntestset: URL, BLEU: 7.3, chr-F: 0.338\ntestset: URL, BLEU: 38.0, chr-F: 0.571\ntestset: URL, BLEU: 10.7, chr-F: 0.116\ntestset: URL, BLEU: 36.2, chr-F: 0.587\ntestset: URL, BLEU: 2.4, chr-F: 0.233\ntestset: URL, BLEU: 6.5, chr-F: 0.368\ntestset: URL, BLEU: 27.5, chr-F: 0.484\ntestset: URL, BLEU: 0.8, chr-F: 0.082\ntestset: URL, BLEU: 9.7, chr-F: 0.168\ntestset: URL, BLEU: 32.5, chr-F: 0.522\ntestset: URL, BLEU: 45.2, chr-F: 0.656\ntestset: URL, BLEU: 32.2, chr-F: 0.554\ntestset: URL, BLEU: 33.6, chr-F: 0.577\ntestset: URL, BLEU: 33.3, chr-F: 0.536\ntestset: URL, BLEU: 19.0, chr-F: 0.113\ntestset: URL, BLEU: 40.8, chr-F: 0.605\ntestset: URL, BLEU: 12.7, chr-F: 0.288\ntestset: URL, BLEU: 19.7, chr-F: 0.285\ntestset: URL, BLEU: 18.7, chr-F: 0.359\ntestset: URL, BLEU: 30.1, chr-F: 0.455\ntestset: URL, BLEU: 34.7, chr-F: 0.540\ntestset: URL, BLEU: 0.0, chr-F: 0.042\ntestset: URL, BLEU: 42.7, chr-F: 0.835\ntestset: URL, BLEU: 35.0, chr-F: 0.587\ntestset: URL, BLEU: 30.8, chr-F: 0.534\ntestset: URL, BLEU: 27.9, chr-F: 0.512\ntestset: URL, BLEU: 33.8, chr-F: 0.537\ntestset: URL, BLEU: 0.4, chr-F: 0.038\ntestset: URL, BLEU: 7.6, chr-F: 0.384\ntestset: URL, BLEU: 37.9, chr-F: 0.559\ntestset: URL, BLEU: 31.3, chr-F: 0.528\ntestset: URL, BLEU: 16.0, chr-F: 0.060\ntestset: URL, BLEU: 29.0, chr-F: 0.512\ntestset: URL, BLEU: 37.6, chr-F: 0.553\ntestset: URL, BLEU: 1.6, chr-F: 0.138\ntestset: URL, BLEU: 4.2, chr-F: 0.278\ntestset: URL, BLEU: 33.0, chr-F: 0.524\ntestset: URL, BLEU: 16.3, chr-F: 0.308\ntestset: URL, BLEU: 10.7, chr-F: 0.045\ntestset: URL, BLEU: 22.3, chr-F: 0.427\ntestset: URL, BLEU: 5.9, chr-F: 0.310\ntestset: URL, BLEU: 20.6, chr-F: 0.459\ntestset: URL, BLEU: 1.5, chr-F: 0.152\ntestset: URL, BLEU: 31.0, chr-F: 0.546\ntestset: URL, BLEU: 5.5, chr-F: 0.326\ntestset: URL, BLEU: 12.7, chr-F: 0.365\ntestset: URL, BLEU: 9.0, chr-F: 0.320\ntestset: URL, BLEU: 26.6, chr-F: 0.495\ntestset: URL, BLEU: 5.6, chr-F: 0.210\ntestset: URL, BLEU: 1.0, chr-F: 0.169\ntestset: URL, BLEU: 7.9, chr-F: 0.328\ntestset: URL, BLEU: 31.1, chr-F: 0.519\ntestset: URL, BLEU: 22.0, chr-F: 0.489\ntestset: URL, BLEU: 19.4, chr-F: 0.263\ntestset: URL, BLEU: 19.0, chr-F: 0.217\ntestset: URL, BLEU: 38.5, chr-F: 0.662\ntestset: URL, BLEU: 6.6, chr-F: 0.305\ntestset: URL, BLEU: 11.5, chr-F: 0.350\ntestset: URL, BLEU: 31.1, chr-F: 0.517\ntestset: URL, BLEU: 31.2, chr-F: 0.528\ntestset: URL, BLEU: 4.9, chr-F: 0.261\ntestset: URL, BLEU: 7.3, chr-F: 0.325\ntestset: URL, BLEU: 0.0, chr-F: 0.008\ntestset: URL, BLEU: 4.8, chr-F: 0.198\ntestset: URL, BLEU: 31.3, chr-F: 0.540\ntestset: URL, BLEU: 24.5, chr-F: 0.476\ntestset: URL, BLEU: 25.7, chr-F: 0.492\ntestset: URL, BLEU: 20.7, chr-F: 0.400\ntestset: URL, BLEU: 30.9, chr-F: 0.526\ntestset: URL, BLEU: 32.0, chr-F: 0.507\ntestset: URL, BLEU: 41.1, chr-F: 0.622\ntestset: URL, BLEU: 7.1, chr-F: 0.367\ntestset: URL, BLEU: 4.7, chr-F: 0.253\ntestset: URL, BLEU: 2.5, chr-F: 0.167\ntestset: URL, BLEU: 11.7, chr-F: 0.217\ntestset: URL, BLEU: 3.9, chr-F: 0.224\ntestset: URL, BLEU: 40.7, chr-F: 0.420\ntestset: URL, BLEU: 2.1, chr-F: 0.134\ntestset: URL, BLEU: 3.4, chr-F: 0.244\ntestset: URL, BLEU: 17.2, chr-F: 0.310\ntestset: URL, BLEU: 32.8, chr-F: 0.524\ntestset: URL, BLEU: 5.7, chr-F: 0.254\ntestset: URL, BLEU: 5.3, chr-F: 0.023\ntestset: URL, BLEU: 3.5, chr-F: 0.237\ntestset: URL, BLEU: 11.9, chr-F: 0.335\ntestset: URL, BLEU: 23.7, chr-F: 0.300\ntestset: URL, BLEU: 0.0, chr-F: 0.146\ntestset: URL, BLEU: 14.1, chr-F: 
0.313\ntestset: URL, BLEU: 33.2, chr-F: 0.528\ntestset: URL, BLEU: 33.4, chr-F: 0.518\ntestset: URL, BLEU: 29.9, chr-F: 0.489\ntestset: URL, BLEU: 19.5, chr-F: 0.405\ntestset: URL, BLEU: 28.6, chr-F: 0.499\ntestset: URL, BLEU: 5.5, chr-F: 0.296\ntestset: URL, BLEU: 18.0, chr-F: 0.546\ntestset: URL, BLEU: 18.0, chr-F: 0.452\ntestset: URL, BLEU: 20.3, chr-F: 0.406\ntestset: URL, BLEU: 33.1, chr-F: 0.541\ntestset: URL, BLEU: 12.4, chr-F: 0.348\ntestset: URL, BLEU: 33.4, chr-F: 0.519\ntestset: URL, BLEU: 32.9, chr-F: 0.503\ntestset: URL, BLEU: 14.8, chr-F: 0.095\ntestset: URL, BLEU: 30.1, chr-F: 0.471\ntestset: URL, BLEU: 12.7, chr-F: 0.377\ntestset: URL, BLEU: 46.9, chr-F: 0.624\ntestset: URL, BLEU: 1.1, chr-F: 0.143\ntestset: URL, BLEU: 21.6, chr-F: 0.446\ntestset: URL, BLEU: 28.1, chr-F: 0.526\ntestset: URL, BLEU: 22.8, chr-F: 0.466\ntestset: URL, BLEU: 16.9, chr-F: 0.442\ntestset: URL, BLEU: 30.8, chr-F: 0.510\ntestset: URL, BLEU: 49.1, chr-F: 0.696\ntestset: URL, BLEU: 27.2, chr-F: 0.497\ntestset: URL, BLEU: 0.5, chr-F: 0.049\ntestset: URL, BLEU: 5.3, chr-F: 0.204\ntestset: URL, BLEU: 22.4, chr-F: 0.476\ntestset: URL, BLEU: 39.3, chr-F: 0.581\ntestset: URL, BLEU: 30.9, chr-F: 0.531\ntestset: URL, BLEU: 0.7, chr-F: 0.109\ntestset: URL, BLEU: 0.9, chr-F: 0.060\ntestset: URL, BLEU: 28.9, chr-F: 0.487\ntestset: URL, BLEU: 41.0, chr-F: 0.595\ntestset: URL, BLEU: 13.9, chr-F: 0.188\ntestset: URL, BLEU: 7.9, chr-F: 0.244\ntestset: URL, BLEU: 41.4, chr-F: 0.610\ntestset: URL, BLEU: 15.8, chr-F: 0.397\ntestset: URL, BLEU: 7.0, chr-F: 0.060\ntestset: URL, BLEU: 7.4, chr-F: 0.303\ntestset: URL, BLEU: 22.2, chr-F: 0.415\ntestset: URL, BLEU: 48.8, chr-F: 0.683\ntestset: URL, BLEU: 1.7, chr-F: 0.181\ntestset: URL, BLEU: 0.3, chr-F: 0.010\ntestset: URL, BLEU: 0.1, chr-F: 0.005\ntestset: URL, BLEU: 5.6, chr-F: 0.051\ntestset: URL, BLEU: 15.0, chr-F: 0.365\ntestset: URL, BLEU: 19.9, chr-F: 0.409\ntestset: URL, BLEU: 33.2, chr-F: 0.529\ntestset: URL, BLEU: 16.1, chr-F: 0.331\ntestset: URL, BLEU: 5.1, chr-F: 0.240\ntestset: URL, BLEU: 13.5, chr-F: 0.357\ntestset: URL, BLEU: 18.0, chr-F: 0.410\ntestset: URL, BLEU: 42.7, chr-F: 0.646\ntestset: URL, BLEU: 0.4, chr-F: 0.088\ntestset: URL, BLEU: 5.6, chr-F: 0.237\ntestset: URL, BLEU: 0.9, chr-F: 0.157\ntestset: URL, BLEU: 9.0, chr-F: 0.382\ntestset: URL, BLEU: 23.7, chr-F: 0.510\ntestset: URL, BLEU: 22.4, chr-F: 0.477\ntestset: URL, BLEU: 0.4, chr-F: 0.119\ntestset: URL, BLEU: 34.1, chr-F: 0.531\ntestset: URL, BLEU: 29.4, chr-F: 0.416\ntestset: URL, BLEU: 37.1, chr-F: 0.568\ntestset: URL, BLEU: 14.0, chr-F: 0.405\ntestset: URL, BLEU: 15.4, chr-F: 0.390\ntestset: URL, BLEU: 34.0, chr-F: 0.550\ntestset: URL, BLEU: 41.1, chr-F: 0.608\ntestset: URL, BLEU: 8.0, chr-F: 0.353\ntestset: URL, BLEU: 0.4, chr-F: 0.010\ntestset: URL, BLEU: 0.2, chr-F: 0.060\ntestset: URL, BLEU: 0.6, chr-F: 0.122\ntestset: URL, BLEU: 26.3, chr-F: 0.498\ntestset: URL, BLEU: 41.6, chr-F: 0.638\ntestset: URL, BLEU: 0.3, chr-F: 0.095\ntestset: URL, BLEU: 4.0, chr-F: 0.219\ntestset: URL, BLEU: 31.9, chr-F: 0.550\ntestset: URL, BLEU: 0.2, chr-F: 0.013\ntestset: URL, BLEU: 29.4, chr-F: 0.510\ntestset: URL, BLEU: 1.6, chr-F: 0.086\ntestset: URL, BLEU: 16.0, chr-F: 0.111\ntestset: URL, BLEU: 9.2, chr-F: 0.269\ntestset: URL, BLEU: 8.4, chr-F: 0.375\ntestset: URL, BLEU: 39.5, chr-F: 0.572\ntestset: URL, BLEU: 27.8, chr-F: 0.495\ntestset: URL, BLEU: 2.9, chr-F: 0.220\ntestset: URL, BLEU: 10.0, chr-F: 0.296\ntestset: URL, BLEU: 30.9, chr-F: 0.499\ntestset: URL, BLEU: 29.9, chr-F: 0.545\ntestset: URL, 
BLEU: 24.5, chr-F: 0.484\ntestset: URL, BLEU: 5.8, chr-F: 0.347\ntestset: URL, BLEU: 16.7, chr-F: 0.426\ntestset: URL, BLEU: 8.4, chr-F: 0.370\ntestset: URL, BLEU: 0.6, chr-F: 0.032\ntestset: URL, BLEU: 9.3, chr-F: 0.283\ntestset: URL, BLEU: 0.3, chr-F: 0.126\ntestset: URL, BLEU: 0.0, chr-F: 0.102\ntestset: URL, BLEU: 4.0, chr-F: 0.175\ntestset: URL, BLEU: 13.2, chr-F: 0.398\ntestset: URL, BLEU: 7.0, chr-F: 0.345\ntestset: URL, BLEU: 5.0, chr-F: 0.110\ntestset: URL, BLEU: 63.1, chr-F: 0.831\ntestset: URL, BLEU: 35.4, chr-F: 0.529\ntestset: URL, BLEU: 38.5, chr-F: 0.528\ntestset: URL, BLEU: 32.8, chr-F: 0.380\ntestset: URL, BLEU: 54.5, chr-F: 0.702\ntestset: URL, BLEU: 36.7, chr-F: 0.570\ntestset: URL, BLEU: 32.9, chr-F: 0.541\ntestset: URL, BLEU: 44.9, chr-F: 0.606\ntestset: URL, BLEU: 0.0, chr-F: 0.877\ntestset: URL, BLEU: 43.2, chr-F: 0.605\ntestset: URL, BLEU: 42.7, chr-F: 0.402\ntestset: URL, BLEU: 4.8, chr-F: 0.253\ntestset: URL, BLEU: 39.3, chr-F: 0.591\ntestset: URL, BLEU: 31.6, chr-F: 0.617\ntestset: URL, BLEU: 21.2, chr-F: 0.559\ntestset: URL, BLEU: 33.1, chr-F: 0.548\ntestset: URL, BLEU: 1.4, chr-F: 0.144\ntestset: URL, BLEU: 6.6, chr-F: 0.373\ntestset: URL, BLEU: 4.5, chr-F: 0.453\ntestset: URL, BLEU: 73.4, chr-F: 0.828\ntestset: URL, BLEU: 25.5, chr-F: 0.440\ntestset: URL, BLEU: 0.0, chr-F: 0.124\ntestset: URL, BLEU: 71.9, chr-F: 0.742\ntestset: URL, BLEU: 59.5, chr-F: 0.742\ntestset: URL, BLEU: 25.9, chr-F: 0.497\ntestset: URL, BLEU: 31.3, chr-F: 0.546\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 28.6, chr-F: 0.495\ntestset: URL, BLEU: 19.0, chr-F: 0.116\ntestset: URL, BLEU: 37.1, chr-F: 0.569\ntestset: URL, BLEU: 13.9, chr-F: 0.336\ntestset: URL, BLEU: 16.5, chr-F: 0.438\ntestset: URL, BLEU: 20.1, chr-F: 0.468\ntestset: URL, BLEU: 8.0, chr-F: 0.316\ntestset: URL, BLEU: 13.0, chr-F: 0.300\ntestset: URL, BLEU: 15.3, chr-F: 0.296\ntestset: URL, BLEU: 0.9, chr-F: 0.199\ntestset: URL, BLEU: 4.9, chr-F: 0.287\ntestset: URL, BLEU: 1.9, chr-F: 0.194\ntestset: URL, BLEU: 45.2, chr-F: 0.574\ntestset: URL, BLEU: 7.8, chr-F: 0.271\ntestset: URL, BLEU: 9.6, chr-F: 0.273\ntestset: URL, BLEU: 0.9, chr-F: 0.102\ntestset: URL, BLEU: 4.4, chr-F: 0.054\ntestset: URL, BLEU: 48.3, chr-F: 0.646\ntestset: URL, BLEU: 1.4, chr-F: 0.034\ntestset: URL, BLEU: 36.7, chr-F: 0.601\ntestset: URL, BLEU: 40.4, chr-F: 0.601\ntestset: URL, BLEU: 33.9, chr-F: 0.538\ntestset: URL, BLEU: 33.1, chr-F: 0.524\ntestset: URL, BLEU: 25.8, chr-F: 0.469\ntestset: URL, BLEU: 34.0, chr-F: 0.543\ntestset: URL, BLEU: 23.0, chr-F: 0.493\ntestset: URL, BLEU: 36.1, chr-F: 0.538\ntestset: URL, BLEU: 3.6, chr-F: 0.400\ntestset: URL, BLEU: 5.3, chr-F: 0.240\ntestset: URL, BLEU: 32.0, chr-F: 0.519\ntestset: URL, BLEU: 13.6, chr-F: 0.318\ntestset: URL, BLEU: 3.8, chr-F: 0.199\ntestset: URL, BLEU: 33.4, chr-F: 0.547\ntestset: URL, BLEU: 32.6, chr-F: 0.546\ntestset: URL, BLEU: 1.4, chr-F: 0.166\ntestset: URL, BLEU: 8.0, chr-F: 0.314\ntestset: URL, BLEU: 10.7, chr-F: 0.520\ntestset: URL, BLEU: 59.9, chr-F: 0.631\ntestset: URL, BLEU: 38.0, chr-F: 0.718\ntestset: URL, BLEU: 2.5, chr-F: 0.213\ntestset: URL, BLEU: 11.0, chr-F: 0.368\ntestset: URL, BLEU: 33.0, chr-F: 0.524\ntestset: URL, BLEU: 40.4, chr-F: 0.574\ntestset: URL, BLEU: 0.1, chr-F: 0.008\ntestset: URL, BLEU: 32.7, chr-F: 0.553\ntestset: URL, BLEU: 26.8, chr-F: 0.496\ntestset: URL, BLEU: 45.7, chr-F: 0.651\ntestset: URL, BLEU: 11.8, chr-F: 0.263\ntestset: URL, BLEU: 31.7, chr-F: 0.528\ntestset: URL, BLEU: 3.6, chr-F: 0.196\ntestset: URL, BLEU: 36.7, chr-F: 
0.586\ntestset: URL, BLEU: 17.1, chr-F: 0.451\ntestset: URL, BLEU: 17.1, chr-F: 0.375\ntestset: URL, BLEU: 38.1, chr-F: 0.565\ntestset: URL, BLEU: 0.0, chr-F: 1.000\ntestset: URL, BLEU: 14.0, chr-F: 0.404\ntestset: URL, BLEU: 1.5, chr-F: 0.014\ntestset: URL, BLEU: 68.7, chr-F: 0.695\ntestset: URL, BLEU: 25.8, chr-F: 0.314\ntestset: URL, BLEU: 13.6, chr-F: 0.319\ntestset: URL, BLEU: 48.3, chr-F: 0.680\ntestset: URL, BLEU: 28.3, chr-F: 0.454\ntestset: URL, BLEU: 4.4, chr-F: 0.206\ntestset: URL, BLEU: 8.0, chr-F: 0.282\ntestset: URL, BLEU: 5.2, chr-F: 0.237\ntestset: URL, BLEU: 9.9, chr-F: 0.395\ntestset: URL, BLEU: 35.4, chr-F: 0.868\ntestset: URL, BLEU: 0.8, chr-F: 0.077\ntestset: URL, BLEU: 4.9, chr-F: 0.240\ntestset: URL, BLEU: 11.3, chr-F: 0.054\ntestset: URL, BLEU: 19.0, chr-F: 0.583\ntestset: URL, BLEU: 5.4, chr-F: 0.320\ntestset: URL, BLEU: 6.3, chr-F: 0.239\ntestset: URL, BLEU: 12.8, chr-F: 0.341\ntestset: URL, BLEU: 17.5, chr-F: 0.382\ntestset: URL, BLEU: 42.7, chr-F: 0.797\ntestset: URL, BLEU: 15.5, chr-F: 0.338\ntestset: URL, BLEU: 2.3, chr-F: 0.176\ntestset: URL, BLEU: 4.5, chr-F: 0.207\ntestset: URL, BLEU: 18.9, chr-F: 0.367\ntestset: URL, BLEU: 6.0, chr-F: 0.156\ntestset: URL, BLEU: 32.2, chr-F: 0.448\ntestset: URL, BLEU: 1.3, chr-F: 0.142\ntestset: URL, BLEU: 15.3, chr-F: 0.363\ntestset: URL, BLEU: 3.2, chr-F: 0.166\ntestset: URL, BLEU: 0.1, chr-F: 0.090\ntestset: URL, BLEU: 1.8, chr-F: 0.206\ntestset: URL, BLEU: 27.8, chr-F: 0.560\ntestset: URL, BLEU: 4.2, chr-F: 0.316\ntestset: URL, BLEU: 24.6, chr-F: 0.466\ntestset: URL, BLEU: 24.5, chr-F: 0.431\ntestset: URL, BLEU: 5.0, chr-F: 0.318\ntestset: URL, BLEU: 19.0, chr-F: 0.390\ntestset: URL, BLEU: 15.0, chr-F: 0.258\ntestset: URL, BLEU: 7.4, chr-F: 0.326\ntestset: URL, BLEU: 12.3, chr-F: 0.325\ntestset: URL, BLEU: 14.2, chr-F: 0.324\ntestset: URL, BLEU: 16.1, chr-F: 0.369\ntestset: URL, BLEU: 3.2, chr-F: 0.125\ntestset: URL, BLEU: 55.9, chr-F: 0.672\ntestset: URL, BLEU: 0.3, chr-F: 0.083\ntestset: URL, BLEU: 7.2, chr-F: 0.383\ntestset: URL, BLEU: 0.0, chr-F: 0.102\ntestset: URL, BLEU: 1.9, chr-F: 0.135",
"### System Info:\n\n\n* hf\\_name: ine-ine\n* source\\_languages: ine\n* target\\_languages: ine\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'en', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine']\n* src\\_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos\\_Latn', 'lad\\_Latn', 'lat\\_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur\\_Latn', 'arg', 'pes\\_Thaa', 'sqi', 'csb\\_Latn', 'fra', 'hat', 'non\\_Latn', 'sco', 'pnb', 'roh', 'bul\\_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw\\_Latn', 'hsb', 'tly\\_Latn', 'bul', 'bel', 'got\\_Goth', 'lat\\_Grek', 'ext', 'gla', 'mai', 'sin', 'hif\\_Latn', 'eng', 'bre', 'nob\\_Hebr', 'prg\\_Latn', 'ang\\_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr\\_Arab', 'san\\_Deva', 'gos', 'rus', 'fao', 'orv\\_Cyrl', 'bel\\_Latn', 'cos', 'zza', 'grc\\_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk\\_Cyrl', 'hye\\_Latn', 'pdc', 'srp\\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp\\_Latn', 'zlm\\_Latn', 'ind', 'rom', 'hye', 'scn', 'enm\\_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus\\_Latn', 'jdt\\_Cyrl', 'gsw', 'glv', 'nld', 'snd\\_Arab', 'kur\\_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm\\_Latn', 'ksh', 'pan\\_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld\\_Latn', 'ces', 'egl', 'vec', 'max\\_Latn', 'pes\\_Latn', 'ltg', 'nds'}\n* tgt\\_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos\\_Latn', 'lad\\_Latn', 'lat\\_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur\\_Latn', 'arg', 'pes\\_Thaa', 'sqi', 'csb\\_Latn', 'fra', 'hat', 'non\\_Latn', 'sco', 'pnb', 'roh', 'bul\\_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw\\_Latn', 'hsb', 'tly\\_Latn', 'bul', 'bel', 'got\\_Goth', 'lat\\_Grek', 'ext', 'gla', 'mai', 'sin', 'hif\\_Latn', 'eng', 'bre', 'nob\\_Hebr', 'prg\\_Latn', 'ang\\_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr\\_Arab', 'san\\_Deva', 'gos', 'rus', 'fao', 'orv\\_Cyrl', 'bel\\_Latn', 'cos', 'zza', 'grc\\_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk\\_Cyrl', 'hye\\_Latn', 'pdc', 'srp\\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp\\_Latn', 'zlm\\_Latn', 'ind', 'rom', 'hye', 'scn', 'enm\\_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus\\_Latn', 'jdt\\_Cyrl', 'gsw', 'glv', 'nld', 'snd\\_Arab', 'kur\\_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm\\_Latn', 'ksh', 'pan\\_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld\\_Latn', 'ces', 'egl', 'vec', 'max\\_Latn', 'pes\\_Latn', 'ltg', 'nds'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ine\n* tgt\\_alpha3: ine\n* short\\_pair: ine-ine\n* chrF2\\_score: 0.509\n* bleu: 30.8\n* brevity\\_penalty: 0.9890000000000001\n* ref\\_len: 69953.0\n* src\\_name: Indo-European languages\n* tgt\\_name: Indo-European languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: ine\n* tgt\\_alpha2: ine\n* prefer\\_old: False\n* long\\_pair: ine-ine\n* 
helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
178,
43310,
2289
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ca #es #os #ro #fy #cy #sc #is #yi #lb #an #sq #fr #ht #rm #ps #af #uk #sl #lt #bg #be #gd #si #en #br #mk #or #mr #ru #fo #co #oc #pl #gl #nb #bn #id #hy #da #gv #nl #pt #hi #as #kw #ga #sv #gu #wa #lv #el #it #hr #ur #nn #de #cs #ine #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
translation | transformers |
### isl-deu
* source group: Icelandic
* target group: German
* OPUS readme: [isl-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-deu/README.md)
* model: transformer-align
* source language(s): isl
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.isl.deu | 49.2 | 0.661 |
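The card stops at the benchmark table, so here is a minimal usage sketch (the model id follows this repo's name; the Icelandic sample sentence is invented, not taken from the card). The checkpoint loads with the MarianMT classes in transformers:

```python
# Minimal sketch: load the is-de checkpoint and translate one sentence.
# MarianTokenizer applies the normalization + SentencePiece preprocessing
# described above, so raw text can be passed in directly.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-is-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Hún les bók."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```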
### System Info:
- hf_name: isl-deu
- source_languages: isl
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['is', 'de']
- src_constituents: {'isl'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-deu/opus-2020-06-17.test.txt
- src_alpha3: isl
- tgt_alpha3: deu
- short_pair: is-de
- chrF2_score: 0.6609999999999999
- bleu: 49.2
- brevity_penalty: 0.998
- ref_len: 6265.0
- src_name: Icelandic
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: is
- tgt_alpha2: de
- prefer_old: False
- long_pair: isl-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["is", "de"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-is-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"is",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"is",
"de"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #is #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### isl-deu
* source group: Icelandic
* target group: German
* OPUS readme: isl-deu
* model: transformer-align
* source language(s): isl
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 49.2, chr-F: 0.661
### System Info:
* hf\_name: isl-deu
* source\_languages: isl
* target\_languages: deu
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['is', 'de']
* src\_constituents: {'isl'}
* tgt\_constituents: {'deu'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: isl
* tgt\_alpha3: deu
* short\_pair: is-de
* chrF2\_score: 0.6609999999999999
* bleu: 49.2
* brevity\_penalty: 0.998
* ref\_len: 6265.0
* src\_name: Icelandic
* tgt\_name: German
* train\_date: 2020-06-17
* src\_alpha2: is
* tgt\_alpha2: de
* prefer\_old: False
* long\_pair: isl-deu
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### isl-deu\n\n\n* source group: Icelandic\n* target group: German\n* OPUS readme: isl-deu\n* model: transformer-align\n* source language(s): isl\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.2, chr-F: 0.661",
"### System Info:\n\n\n* hf\\_name: isl-deu\n* source\\_languages: isl\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['is', 'de']\n* src\\_constituents: {'isl'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: isl\n* tgt\\_alpha3: deu\n* short\\_pair: is-de\n* chrF2\\_score: 0.6609999999999999\n* bleu: 49.2\n* brevity\\_penalty: 0.998\n* ref\\_len: 6265.0\n* src\\_name: Icelandic\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: is\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: isl-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #is #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### isl-deu\n\n\n* source group: Icelandic\n* target group: German\n* OPUS readme: isl-deu\n* model: transformer-align\n* source language(s): isl\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.2, chr-F: 0.661",
"### System Info:\n\n\n* hf\\_name: isl-deu\n* source\\_languages: isl\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['is', 'de']\n* src\\_constituents: {'isl'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: isl\n* tgt\\_alpha3: deu\n* short\\_pair: is-de\n* chrF2\\_score: 0.6609999999999999\n* bleu: 49.2\n* brevity\\_penalty: 0.998\n* ref\\_len: 6265.0\n* src\\_name: Icelandic\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: is\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: isl-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
137,
413
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #is #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### isl-deu\n\n\n* source group: Icelandic\n* target group: German\n* OPUS readme: isl-deu\n* model: transformer-align\n* source language(s): isl\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.2, chr-F: 0.661### System Info:\n\n\n* hf\\_name: isl-deu\n* source\\_languages: isl\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['is', 'de']\n* src\\_constituents: {'isl'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: isl\n* tgt\\_alpha3: deu\n* short\\_pair: is-de\n* chrF2\\_score: 0.6609999999999999\n* bleu: 49.2\n* brevity\\_penalty: 0.998\n* ref\\_len: 6265.0\n* src\\_name: Icelandic\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: is\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: isl-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-is-en
* source languages: is
* target languages: en
* OPUS readme: [is-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/is-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/is-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.is.en | 51.4 | 0.672 |
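For quick experiments, the checkpoint can also be driven through the high-level `pipeline` API; this is a hedged sketch (the input sentence is illustrative only):

```python
# Sketch: one-line translation via the transformers pipeline API.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-is-en")
result = translator("Ég tala smá íslensku.")
print(result[0]["translation_text"])
```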
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-is-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"is",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #is #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-is-en
* source languages: is
* target languages: en
* OPUS readme: is-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 51.4, chr-F: 0.672
| [
"### opus-mt-is-en\n\n\n* source languages: is\n* target languages: en\n* OPUS readme: is-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.4, chr-F: 0.672"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #is #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-is-en\n\n\n* source languages: is\n* target languages: en\n* OPUS readme: is-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.4, chr-F: 0.672"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #is #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-is-en\n\n\n* source languages: is\n* target languages: en\n* OPUS readme: is-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.4, chr-F: 0.672"
] |
translation | transformers |
### isl-epo
* source group: Icelandic
* target group: Esperanto
* OPUS readme: [isl-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-epo/README.md)
* model: transformer-align
* source language(s): isl
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.isl.epo | 11.8 | 0.314 |
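As an illustrative sketch of batched decoding (both Icelandic inputs are placeholders, and `num_beams=4` is an arbitrary choice rather than a setting from the card):

```python
# Sketch: translate a small batch with explicit beam search.
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-is-eo"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tokenizer(["Góðan daginn.", "Takk fyrir."],
                  return_tensors="pt", padding=True)
outputs = model.generate(**batch, num_beams=4)
for line in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(line)
```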
### System Info:
- hf_name: isl-epo
- source_languages: isl
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['is', 'eo']
- src_constituents: {'isl'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-epo/opus-2020-06-16.test.txt
- src_alpha3: isl
- tgt_alpha3: epo
- short_pair: is-eo
- chrF2_score: 0.314
- bleu: 11.8
- brevity_penalty: 1.0
- ref_len: 1528.0
- src_name: Icelandic
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: is
- tgt_alpha2: eo
- prefer_old: False
- long_pair: isl-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["is", "eo"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-is-eo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"is",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"is",
"eo"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #is #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### isl-epo
* source group: Icelandic
* target group: Esperanto
* OPUS readme: isl-epo
* model: transformer-align
* source language(s): isl
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 11.8, chr-F: 0.314
### System Info:
* hf\_name: isl-epo
* source\_languages: isl
* target\_languages: epo
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['is', 'eo']
* src\_constituents: {'isl'}
* tgt\_constituents: {'epo'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: isl
* tgt\_alpha3: epo
* short\_pair: is-eo
* chrF2\_score: 0.314
* bleu: 11.8
* brevity\_penalty: 1.0
* ref\_len: 1528.0
* src\_name: Icelandic
* tgt\_name: Esperanto
* train\_date: 2020-06-16
* src\_alpha2: is
* tgt\_alpha2: eo
* prefer\_old: False
* long\_pair: isl-epo
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### isl-epo\n\n\n* source group: Icelandic\n* target group: Esperanto\n* OPUS readme: isl-epo\n* model: transformer-align\n* source language(s): isl\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.8, chr-F: 0.314",
"### System Info:\n\n\n* hf\\_name: isl-epo\n* source\\_languages: isl\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['is', 'eo']\n* src\\_constituents: {'isl'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: isl\n* tgt\\_alpha3: epo\n* short\\_pair: is-eo\n* chrF2\\_score: 0.314\n* bleu: 11.8\n* brevity\\_penalty: 1.0\n* ref\\_len: 1528.0\n* src\\_name: Icelandic\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: is\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: isl-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #is #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### isl-epo\n\n\n* source group: Icelandic\n* target group: Esperanto\n* OPUS readme: isl-epo\n* model: transformer-align\n* source language(s): isl\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.8, chr-F: 0.314",
"### System Info:\n\n\n* hf\\_name: isl-epo\n* source\\_languages: isl\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['is', 'eo']\n* src\\_constituents: {'isl'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: isl\n* tgt\\_alpha3: epo\n* short\\_pair: is-eo\n* chrF2\\_score: 0.314\n* bleu: 11.8\n* brevity\\_penalty: 1.0\n* ref\\_len: 1528.0\n* src\\_name: Icelandic\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: is\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: isl-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
138,
404
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #is #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### isl-epo\n\n\n* source group: Icelandic\n* target group: Esperanto\n* OPUS readme: isl-epo\n* model: transformer-align\n* source language(s): isl\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.8, chr-F: 0.314### System Info:\n\n\n* hf\\_name: isl-epo\n* source\\_languages: isl\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['is', 'eo']\n* src\\_constituents: {'isl'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: isl\n* tgt\\_alpha3: epo\n* short\\_pair: is-eo\n* chrF2\\_score: 0.314\n* bleu: 11.8\n* brevity\\_penalty: 1.0\n* ref\\_len: 1528.0\n* src\\_name: Icelandic\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: is\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: isl-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### isl-spa
* source group: Icelandic
* target group: Spanish
* OPUS readme: [isl-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-spa/README.md)
* model: transformer-align
* source language(s): isl
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.isl.spa | 51.2 | 0.665 |
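Scores like the BLEU and chr-F columns above are conventionally computed with sacrebleu; the snippet below is a schematic example only (the hypothesis and reference strings are placeholders, and note that sacrebleu reports chrF on a 0-100 scale while the table uses 0-1):

```python
# Sketch: scoring system output against references with sacrebleu.
import sacrebleu

hypotheses = ["Ella lee un libro."]    # system translations, one per segment
references = [["Ella lee un libro."]]  # one reference stream
print(sacrebleu.corpus_bleu(hypotheses, references).score)  # BLEU
print(sacrebleu.corpus_chrf(hypotheses, references).score)  # chrF2, 0-100
```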
### System Info:
- hf_name: isl-spa
- source_languages: isl
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['is', 'es']
- src_constituents: {'isl'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-spa/opus-2020-06-17.test.txt
- src_alpha3: isl
- tgt_alpha3: spa
- short_pair: is-es
- chrF2_score: 0.665
- bleu: 51.2
- brevity_penalty: 0.985
- ref_len: 1229.0
- src_name: Icelandic
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: is
- tgt_alpha2: es
- prefer_old: False
- long_pair: isl-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["is", "es"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-is-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"is",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"is",
"es"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #is #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### isl-spa
* source group: Icelandic
* target group: Spanish
* OPUS readme: isl-spa
* model: transformer-align
* source language(s): isl
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 51.2, chr-F: 0.665
### System Info:
* hf\_name: isl-spa
* source\_languages: isl
* target\_languages: spa
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['is', 'es']
* src\_constituents: {'isl'}
* tgt\_constituents: {'spa'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: isl
* tgt\_alpha3: spa
* short\_pair: is-es
* chrF2\_score: 0.665
* bleu: 51.2
* brevity\_penalty: 0.985
* ref\_len: 1229.0
* src\_name: Icelandic
* tgt\_name: Spanish
* train\_date: 2020-06-17
* src\_alpha2: is
* tgt\_alpha2: es
* prefer\_old: False
* long\_pair: isl-spa
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### isl-spa\n\n\n* source group: Icelandic\n* target group: Spanish\n* OPUS readme: isl-spa\n* model: transformer-align\n* source language(s): isl\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.2, chr-F: 0.665",
"### System Info:\n\n\n* hf\\_name: isl-spa\n* source\\_languages: isl\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['is', 'es']\n* src\\_constituents: {'isl'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: isl\n* tgt\\_alpha3: spa\n* short\\_pair: is-es\n* chrF2\\_score: 0.665\n* bleu: 51.2\n* brevity\\_penalty: 0.985\n* ref\\_len: 1229.0\n* src\\_name: Icelandic\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: is\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: isl-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #is #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### isl-spa\n\n\n* source group: Icelandic\n* target group: Spanish\n* OPUS readme: isl-spa\n* model: transformer-align\n* source language(s): isl\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.2, chr-F: 0.665",
"### System Info:\n\n\n* hf\\_name: isl-spa\n* source\\_languages: isl\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['is', 'es']\n* src\\_constituents: {'isl'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: isl\n* tgt\\_alpha3: spa\n* short\\_pair: is-es\n* chrF2\\_score: 0.665\n* bleu: 51.2\n* brevity\\_penalty: 0.985\n* ref\\_len: 1229.0\n* src\\_name: Icelandic\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: is\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: isl-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
134,
396
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #is #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### isl-spa\n\n\n* source group: Icelandic\n* target group: Spanish\n* OPUS readme: isl-spa\n* model: transformer-align\n* source language(s): isl\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.2, chr-F: 0.665### System Info:\n\n\n* hf\\_name: isl-spa\n* source\\_languages: isl\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['is', 'es']\n* src\\_constituents: {'isl'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: isl\n* tgt\\_alpha3: spa\n* short\\_pair: is-es\n* chrF2\\_score: 0.665\n* bleu: 51.2\n* brevity\\_penalty: 0.985\n* ref\\_len: 1229.0\n* src\\_name: Icelandic\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: is\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: isl-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-is-fi
* source languages: is
* target languages: fi
* OPUS readme: [is-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/is-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/is-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.is.fi | 25.0 | 0.489 |
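The original Marian weights can be fetched from the download link listed above; a minimal sketch follows (the local filename is an arbitrary choice, and a real script should stream rather than buffer the whole archive in memory):

```python
# Sketch: download the original Marian weights for offline use.
import requests

url = "https://object.pouta.csc.fi/OPUS-MT-models/is-fi/opus-2020-01-09.zip"
response = requests.get(url, timeout=60)
response.raise_for_status()
with open("opus-2020-01-09.zip", "wb") as f:
    f.write(response.content)
```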
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-is-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"is",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #is #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-is-fi
* source languages: is
* target languages: fi
* OPUS readme: is-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.0, chr-F: 0.489
| [
"### opus-mt-is-fi\n\n\n* source languages: is\n* target languages: fi\n* OPUS readme: is-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.489"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #is #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-is-fi\n\n\n* source languages: is\n* target languages: fi\n* OPUS readme: is-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.489"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #is #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-is-fi\n\n\n* source languages: is\n* target languages: fi\n* OPUS readme: is-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.489"
] |
translation | transformers |
### opus-mt-is-fr
* source languages: is
* target languages: fr
* OPUS readme: [is-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/is-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/is-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.is.fr | 25.0 | 0.437 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-is-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"is",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #is #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-is-fr
* source languages: is
* target languages: fr
* OPUS readme: is-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.0, chr-F: 0.437
| [
"### opus-mt-is-fr\n\n\n* source languages: is\n* target languages: fr\n* OPUS readme: is-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.437"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #is #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-is-fr\n\n\n* source languages: is\n* target languages: fr\n* OPUS readme: is-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.437"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #is #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-is-fr\n\n\n* source languages: is\n* target languages: fr\n* OPUS readme: is-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.437"
] |
translation | transformers |
### isl-ita
* source group: Icelandic
* target group: Italian
* OPUS readme: [isl-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-ita/README.md)
* model: transformer-align
* source language(s): isl
* target language(s): ita
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-ita/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-ita/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-ita/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.isl.ita | 46.7 | 0.662 |
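The `(spm32k,spm32k)` note above means source and target text are segmented with separately trained 32k-vocabulary SentencePiece models before they reach the transformer. Below is a minimal sketch of that step, assuming the `sentencepiece` Python package; the file name `source.spm` stands in for the source-side model bundled with the original-weights zip and is an assumption, not confirmed by this card.

```python
# Sketch of the "normalization + SentencePiece (spm32k,spm32k)" pre-processing step.
import sentencepiece as spm

# "source.spm" is an assumed name for the source-side SentencePiece model.
sp = spm.SentencePieceProcessor(model_file="source.spm")

text = "Halló, heimur!"
pieces = sp.encode(text, out_type=str)  # subword pieces the transformer actually sees
ids = sp.encode(text, out_type=int)     # the corresponding vocabulary ids
print(pieces, ids)
```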
### System Info:
- hf_name: isl-ita
- source_languages: isl
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['is', 'it']
- src_constituents: {'isl'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-ita/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-ita/opus-2020-06-17.test.txt
- src_alpha3: isl
- tgt_alpha3: ita
- short_pair: is-it
- chrF2_score: 0.662
- bleu: 46.7
- brevity_penalty: 0.977
- ref_len: 1450.0
- src_name: Icelandic
- tgt_name: Italian
- train_date: 2020-06-17
- src_alpha2: is
- tgt_alpha2: it
- prefer_old: False
- long_pair: isl-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["is", "it"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-is-it | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"is",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"is",
"it"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #is #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### isl-ita
* source group: Icelandic
* target group: Italian
* OPUS readme: isl-ita
* model: transformer-align
* source language(s): isl
* target language(s): ita
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 46.7, chr-F: 0.662
### System Info:
* hf\_name: isl-ita
* source\_languages: isl
* target\_languages: ita
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['is', 'it']
* src\_constituents: {'isl'}
* tgt\_constituents: {'ita'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: isl
* tgt\_alpha3: ita
* short\_pair: is-it
* chrF2\_score: 0.662
* bleu: 46.7
* brevity\_penalty: 0.977
* ref\_len: 1450.0
* src\_name: Icelandic
* tgt\_name: Italian
* train\_date: 2020-06-17
* src\_alpha2: is
* tgt\_alpha2: it
* prefer\_old: False
* long\_pair: isl-ita
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### isl-ita\n\n\n* source group: Icelandic\n* target group: Italian\n* OPUS readme: isl-ita\n* model: transformer-align\n* source language(s): isl\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.7, chr-F: 0.662",
"### System Info:\n\n\n* hf\\_name: isl-ita\n* source\\_languages: isl\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['is', 'it']\n* src\\_constituents: {'isl'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: isl\n* tgt\\_alpha3: ita\n* short\\_pair: is-it\n* chrF2\\_score: 0.662\n* bleu: 46.7\n* brevity\\_penalty: 0.977\n* ref\\_len: 1450.0\n* src\\_name: Icelandic\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: is\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: isl-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #is #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### isl-ita\n\n\n* source group: Icelandic\n* target group: Italian\n* OPUS readme: isl-ita\n* model: transformer-align\n* source language(s): isl\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.7, chr-F: 0.662",
"### System Info:\n\n\n* hf\\_name: isl-ita\n* source\\_languages: isl\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['is', 'it']\n* src\\_constituents: {'isl'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: isl\n* tgt\\_alpha3: ita\n* short\\_pair: is-it\n* chrF2\\_score: 0.662\n* bleu: 46.7\n* brevity\\_penalty: 0.977\n* ref\\_len: 1450.0\n* src\\_name: Icelandic\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: is\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: isl-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
137,
401
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #is #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### isl-ita\n\n\n* source group: Icelandic\n* target group: Italian\n* OPUS readme: isl-ita\n* model: transformer-align\n* source language(s): isl\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.7, chr-F: 0.662### System Info:\n\n\n* hf\\_name: isl-ita\n* source\\_languages: isl\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['is', 'it']\n* src\\_constituents: {'isl'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: isl\n* tgt\\_alpha3: ita\n* short\\_pair: is-it\n* chrF2\\_score: 0.662\n* bleu: 46.7\n* brevity\\_penalty: 0.977\n* ref\\_len: 1450.0\n* src\\_name: Icelandic\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: is\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: isl-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-is-sv
* source languages: is
* target languages: sv
* OPUS readme: [is-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/is-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/is-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.is.sv | 30.4 | 0.495 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-is-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"is",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #is #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-is-sv
* source languages: is
* target languages: sv
* OPUS readme: is-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.4, chr-F: 0.495
| [
"### opus-mt-is-sv\n\n\n* source languages: is\n* target languages: sv\n* OPUS readme: is-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.495"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #is #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-is-sv\n\n\n* source languages: is\n* target languages: sv\n* OPUS readme: is-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.495"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #is #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-is-sv\n\n\n* source languages: is\n* target languages: sv\n* OPUS readme: is-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.495"
] |
translation | transformers |
### opus-mt-iso-en
* source languages: iso
* target languages: en
* OPUS readme: [iso-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/iso-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/iso-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-en/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.iso.en | 35.5 | 0.506 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-iso-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"iso",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #iso #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-iso-en
* source languages: iso
* target languages: en
* OPUS readme: iso-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 35.5, chr-F: 0.506
| [
"### opus-mt-iso-en\n\n\n* source languages: iso\n* target languages: en\n* OPUS readme: iso-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.5, chr-F: 0.506"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #iso #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-iso-en\n\n\n* source languages: iso\n* target languages: en\n* OPUS readme: iso-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.5, chr-F: 0.506"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #iso #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-iso-en\n\n\n* source languages: iso\n* target languages: en\n* OPUS readme: iso-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.5, chr-F: 0.506"
] |
translation | transformers |
### opus-mt-iso-es
* source languages: iso
* target languages: es
* OPUS readme: [iso-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/iso-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/iso-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.iso.es | 22.4 | 0.394 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-iso-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"iso",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #iso #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-iso-es
* source languages: iso
* target languages: es
* OPUS readme: iso-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.4, chr-F: 0.394
| [
"### opus-mt-iso-es\n\n\n* source languages: iso\n* target languages: es\n* OPUS readme: iso-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.4, chr-F: 0.394"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #iso #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-iso-es\n\n\n* source languages: iso\n* target languages: es\n* OPUS readme: iso-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.4, chr-F: 0.394"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #iso #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-iso-es\n\n\n* source languages: iso\n* target languages: es\n* OPUS readme: iso-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.4, chr-F: 0.394"
] |
translation | transformers |
### opus-mt-iso-fi
* source languages: iso
* target languages: fi
* OPUS readme: [iso-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/iso-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/iso-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-fi/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.iso.fi | 23.0 | 0.443 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-iso-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"iso",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #iso #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-iso-fi
* source languages: iso
* target languages: fi
* OPUS readme: iso-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.0, chr-F: 0.443
| [
"### opus-mt-iso-fi\n\n\n* source languages: iso\n* target languages: fi\n* OPUS readme: iso-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.0, chr-F: 0.443"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #iso #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-iso-fi\n\n\n* source languages: iso\n* target languages: fi\n* OPUS readme: iso-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.0, chr-F: 0.443"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #iso #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-iso-fi\n\n\n* source languages: iso\n* target languages: fi\n* OPUS readme: iso-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.0, chr-F: 0.443"
] |
translation | transformers |
### opus-mt-iso-fr
* source languages: iso
* target languages: fr
* OPUS readme: [iso-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/iso-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/iso-fr/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-fr/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-fr/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.iso.fr | 25.6 | 0.422 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-iso-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"iso",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #iso #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-iso-fr
* source languages: iso
* target languages: fr
* OPUS readme: iso-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.6, chr-F: 0.422
| [
"### opus-mt-iso-fr\n\n\n* source languages: iso\n* target languages: fr\n* OPUS readme: iso-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.6, chr-F: 0.422"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #iso #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-iso-fr\n\n\n* source languages: iso\n* target languages: fr\n* OPUS readme: iso-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.6, chr-F: 0.422"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #iso #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-iso-fr\n\n\n* source languages: iso\n* target languages: fr\n* OPUS readme: iso-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.6, chr-F: 0.422"
] |
translation | transformers |
### opus-mt-iso-sv
* source languages: iso
* target languages: sv
* OPUS readme: [iso-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/iso-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/iso-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/iso-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.iso.sv | 25.0 | 0.430 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-iso-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"iso",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #iso #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-iso-sv
* source languages: iso
* target languages: sv
* OPUS readme: iso-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.0, chr-F: 0.430
| [
"### opus-mt-iso-sv\n\n\n* source languages: iso\n* target languages: sv\n* OPUS readme: iso-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.430"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #iso #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-iso-sv\n\n\n* source languages: iso\n* target languages: sv\n* OPUS readme: iso-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.430"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #iso #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-iso-sv\n\n\n* source languages: iso\n* target languages: sv\n* OPUS readme: iso-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.430"
] |
translation | transformers |
### ita-ara
* source group: Italian
* target group: Arabic
* OPUS readme: [ita-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-ara/README.md)
* model: transformer
* source language(s): ita
* target language(s): ara
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ara/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ita.ara | 21.9 | 0.517 |
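The BLEU and chrF2 figures above can be recomputed from the published test-set translations. Here is a sketch assuming the `sacrebleu` package and that system outputs and references have already been split out of the downloadable test file into aligned string lists (the one-segment placeholders below are illustrative only); note that `sacrebleu` reports chrF on a 0-100 scale, while this card uses 0-1.

```python
# Recompute corpus-level BLEU and chrF for aligned hypothesis/reference lists.
import sacrebleu

hyps = ["placeholder system output"]  # one string per translated segment
refs = [["placeholder reference"]]    # one reference stream, aligned with hyps

bleu = sacrebleu.corpus_bleu(hyps, refs)
chrf = sacrebleu.corpus_chrf(hyps, refs)
print(bleu.score)          # the card reports 21.9 on the real test set
print(chrf.score / 100.0)  # the card reports 0.517; sacrebleu uses 0-100
```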
### System Info:
- hf_name: ita-ara
- source_languages: ita
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'ar']
- src_constituents: {'ita'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ara/opus-2020-07-03.test.txt
- src_alpha3: ita
- tgt_alpha3: ara
- short_pair: it-ar
- chrF2_score: 0.517
- bleu: 21.9
- brevity_penalty: 0.95
- ref_len: 1161.0
- src_name: Italian
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: it
- tgt_alpha2: ar
- prefer_old: False
- long_pair: ita-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["it", "ar"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-it-ar | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"it",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"it",
"ar"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #it #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ita-ara
* source group: Italian
* target group: Arabic
* OPUS readme: ita-ara
* model: transformer
* source language(s): ita
* target language(s): ara
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.9, chr-F: 0.517
### System Info:
* hf\_name: ita-ara
* source\_languages: ita
* target\_languages: ara
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['it', 'ar']
* src\_constituents: {'ita'}
* tgt\_constituents: {'apc', 'ara', 'arq\_Latn', 'arq', 'afb', 'ara\_Latn', 'apc\_Latn', 'arz'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ita
* tgt\_alpha3: ara
* short\_pair: it-ar
* chrF2\_score: 0.517
* bleu: 21.9
* brevity\_penalty: 0.95
* ref\_len: 1161.0
* src\_name: Italian
* tgt\_name: Arabic
* train\_date: 2020-07-03
* src\_alpha2: it
* tgt\_alpha2: ar
* prefer\_old: False
* long\_pair: ita-ara
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ita-ara\n\n\n* source group: Italian\n* target group: Arabic\n* OPUS readme: ita-ara\n* model: transformer\n* source language(s): ita\n* target language(s): ara\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.9, chr-F: 0.517",
"### System Info:\n\n\n* hf\\_name: ita-ara\n* source\\_languages: ita\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ar']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: ara\n* short\\_pair: it-ar\n* chrF2\\_score: 0.517\n* bleu: 21.9\n* brevity\\_penalty: 0.95\n* ref\\_len: 1161.0\n* src\\_name: Italian\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: it\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: ita-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ita-ara\n\n\n* source group: Italian\n* target group: Arabic\n* OPUS readme: ita-ara\n* model: transformer\n* source language(s): ita\n* target language(s): ara\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.9, chr-F: 0.517",
"### System Info:\n\n\n* hf\\_name: ita-ara\n* source\\_languages: ita\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ar']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: ara\n* short\\_pair: it-ar\n* chrF2\\_score: 0.517\n* bleu: 21.9\n* brevity\\_penalty: 0.95\n* ref\\_len: 1161.0\n* src\\_name: Italian\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: it\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: ita-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
130,
443
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ita-ara\n\n\n* source group: Italian\n* target group: Arabic\n* OPUS readme: ita-ara\n* model: transformer\n* source language(s): ita\n* target language(s): ara\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.9, chr-F: 0.517### System Info:\n\n\n* hf\\_name: ita-ara\n* source\\_languages: ita\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ar']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: ara\n* short\\_pair: it-ar\n* chrF2\\_score: 0.517\n* bleu: 21.9\n* brevity\\_penalty: 0.95\n* ref\\_len: 1161.0\n* src\\_name: Italian\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: it\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: ita-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### ita-bul
* source group: Italian
* target group: Bulgarian
* OPUS readme: [ita-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-bul/README.md)
* model: transformer
* source language(s): ita
* target language(s): bul
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-bul/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ita.bul | 47.9 | 0.664 |
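The `brevity_penalty` and `ref_len` fields in the System Info below follow the standard BLEU definition: BP = exp(1 - r/c) when the candidate length c falls short of the reference length r, and 1 otherwise. A small self-contained check, with the candidate length back-solved from the reported numbers purely for illustration:

```python
# BLEU brevity penalty: BP = 1 if c >= r else exp(1 - r/c).
import math

def brevity_penalty(cand_len: float, ref_len: float) -> float:
    return 1.0 if cand_len >= ref_len else math.exp(1.0 - ref_len / cand_len)

ref_len = 16512.0                                    # from the System Info below
cand_len = ref_len / (1.0 - math.log(0.961))         # ~15880, back-solved from BP = 0.961
print(round(brevity_penalty(cand_len, ref_len), 3))  # -> 0.961
```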
### System Info:
- hf_name: ita-bul
- source_languages: ita
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'bg']
- src_constituents: {'ita'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-bul/opus-2020-07-03.test.txt
- src_alpha3: ita
- tgt_alpha3: bul
- short_pair: it-bg
- chrF2_score: 0.664
- bleu: 47.9
- brevity_penalty: 0.961
- ref_len: 16512.0
- src_name: Italian
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: it
- tgt_alpha2: bg
- prefer_old: False
- long_pair: ita-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["it", "bg"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-it-bg | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"it",
"bg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"it",
"bg"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #it #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ita-bul
* source group: Italian
* target group: Bulgarian
* OPUS readme: ita-bul
* model: transformer
* source language(s): ita
* target language(s): bul
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 47.9, chr-F: 0.664
### System Info:
* hf\_name: ita-bul
* source\_languages: ita
* target\_languages: bul
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['it', 'bg']
* src\_constituents: {'ita'}
* tgt\_constituents: {'bul', 'bul\_Latn'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ita
* tgt\_alpha3: bul
* short\_pair: it-bg
* chrF2\_score: 0.664
* bleu: 47.9
* brevity\_penalty: 0.961
* ref\_len: 16512.0
* src\_name: Italian
* tgt\_name: Bulgarian
* train\_date: 2020-07-03
* src\_alpha2: it
* tgt\_alpha2: bg
* prefer\_old: False
* long\_pair: ita-bul
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ita-bul\n\n\n* source group: Italian\n* target group: Bulgarian\n* OPUS readme: ita-bul\n* model: transformer\n* source language(s): ita\n* target language(s): bul\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.9, chr-F: 0.664",
"### System Info:\n\n\n* hf\\_name: ita-bul\n* source\\_languages: ita\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'bg']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: bul\n* short\\_pair: it-bg\n* chrF2\\_score: 0.664\n* bleu: 47.9\n* brevity\\_penalty: 0.961\n* ref\\_len: 16512.0\n* src\\_name: Italian\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-07-03\n* src\\_alpha2: it\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: ita-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ita-bul\n\n\n* source group: Italian\n* target group: Bulgarian\n* OPUS readme: ita-bul\n* model: transformer\n* source language(s): ita\n* target language(s): bul\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.9, chr-F: 0.664",
"### System Info:\n\n\n* hf\\_name: ita-bul\n* source\\_languages: ita\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'bg']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: bul\n* short\\_pair: it-bg\n* chrF2\\_score: 0.664\n* bleu: 47.9\n* brevity\\_penalty: 0.961\n* ref\\_len: 16512.0\n* src\\_name: Italian\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-07-03\n* src\\_alpha2: it\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: ita-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
133,
414
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ita-bul\n\n\n* source group: Italian\n* target group: Bulgarian\n* OPUS readme: ita-bul\n* model: transformer\n* source language(s): ita\n* target language(s): bul\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.9, chr-F: 0.664### System Info:\n\n\n* hf\\_name: ita-bul\n* source\\_languages: ita\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'bg']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: bul\n* short\\_pair: it-bg\n* chrF2\\_score: 0.664\n* bleu: 47.9\n* brevity\\_penalty: 0.961\n* ref\\_len: 16512.0\n* src\\_name: Italian\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-07-03\n* src\\_alpha2: it\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: ita-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### ita-cat
* source group: Italian
* target group: Catalan
* OPUS readme: [ita-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-cat/README.md)
* model: transformer-align
* source language(s): ita
* target language(s): cat
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-cat/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-cat/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-cat/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ita.cat | 52.5 | 0.706 |
### System Info:
- hf_name: ita-cat
- source_languages: ita
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'ca']
- src_constituents: {'ita'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-cat/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-cat/opus-2020-06-16.test.txt
- src_alpha3: ita
- tgt_alpha3: cat
- short_pair: it-ca
- chrF2_score: 0.706
- bleu: 52.5
- brevity_penalty: 0.993
- ref_len: 2074.0
- src_name: Italian
- tgt_name: Catalan
- train_date: 2020-06-16
- src_alpha2: it
- tgt_alpha2: ca
- prefer_old: False
- long_pair: ita-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["it", "ca"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-it-ca | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"it",
"ca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"it",
"ca"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #it #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ita-cat
* source group: Italian
* target group: Catalan
* OPUS readme: ita-cat
* model: transformer-align
* source language(s): ita
* target language(s): cat
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 52.5, chr-F: 0.706
### System Info:
* hf\_name: ita-cat
* source\_languages: ita
* target\_languages: cat
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['it', 'ca']
* src\_constituents: {'ita'}
* tgt\_constituents: {'cat'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm12k,spm12k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ita
* tgt\_alpha3: cat
* short\_pair: it-ca
* chrF2\_score: 0.706
* bleu: 52.5
* brevity\_penalty: 0.993
* ref\_len: 2074.0
* src\_name: Italian
* tgt\_name: Catalan
* train\_date: 2020-06-16
* src\_alpha2: it
* tgt\_alpha2: ca
* prefer\_old: False
* long\_pair: ita-cat
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ita-cat\n\n\n* source group: Italian\n* target group: Catalan\n* OPUS readme: ita-cat\n* model: transformer-align\n* source language(s): ita\n* target language(s): cat\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.5, chr-F: 0.706",
"### System Info:\n\n\n* hf\\_name: ita-cat\n* source\\_languages: ita\n* target\\_languages: cat\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ca']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'cat'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: cat\n* short\\_pair: it-ca\n* chrF2\\_score: 0.706\n* bleu: 52.5\n* brevity\\_penalty: 0.993\n* ref\\_len: 2074.0\n* src\\_name: Italian\n* tgt\\_name: Catalan\n* train\\_date: 2020-06-16\n* src\\_alpha2: it\n* tgt\\_alpha2: ca\n* prefer\\_old: False\n* long\\_pair: ita-cat\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ita-cat\n\n\n* source group: Italian\n* target group: Catalan\n* OPUS readme: ita-cat\n* model: transformer-align\n* source language(s): ita\n* target language(s): cat\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.5, chr-F: 0.706",
"### System Info:\n\n\n* hf\\_name: ita-cat\n* source\\_languages: ita\n* target\\_languages: cat\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ca']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'cat'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: cat\n* short\\_pair: it-ca\n* chrF2\\_score: 0.706\n* bleu: 52.5\n* brevity\\_penalty: 0.993\n* ref\\_len: 2074.0\n* src\\_name: Italian\n* tgt\\_name: Catalan\n* train\\_date: 2020-06-16\n* src\\_alpha2: it\n* tgt\\_alpha2: ca\n* prefer\\_old: False\n* long\\_pair: ita-cat\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
134,
396
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ita-cat\n\n\n* source group: Italian\n* target group: Catalan\n* OPUS readme: ita-cat\n* model: transformer-align\n* source language(s): ita\n* target language(s): cat\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.5, chr-F: 0.706### System Info:\n\n\n* hf\\_name: ita-cat\n* source\\_languages: ita\n* target\\_languages: cat\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ca']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'cat'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: cat\n* short\\_pair: it-ca\n* chrF2\\_score: 0.706\n* bleu: 52.5\n* brevity\\_penalty: 0.993\n* ref\\_len: 2074.0\n* src\\_name: Italian\n* tgt\\_name: Catalan\n* train\\_date: 2020-06-16\n* src\\_alpha2: it\n* tgt\\_alpha2: ca\n* prefer\\_old: False\n* long\\_pair: ita-cat\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-it-de
* source languages: it
* target languages: de
* OPUS readme: [it-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/it-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/it-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.it.de | 49.4 | 0.678 |
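
A minimal usage sketch with the Marian classes from `transformers` (the card itself only lists weights and scores; the Italian sample sentence below is purely illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-it-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a sample Italian sentence into German
batch = tokenizer(["La vita è bella."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```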
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-it-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"it",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #it #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-it-de
* source languages: it
* target languages: de
* OPUS readme: it-de
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 49.4, chr-F: 0.678
| [
"### opus-mt-it-de\n\n\n* source languages: it\n* target languages: de\n* OPUS readme: it-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.4, chr-F: 0.678"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-it-de\n\n\n* source languages: it\n* target languages: de\n* OPUS readme: it-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.4, chr-F: 0.678"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-it-de\n\n\n* source languages: it\n* target languages: de\n* OPUS readme: it-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.4, chr-F: 0.678"
] |
translation | transformers |
### opus-mt-it-en
* source languages: it
* target languages: en
* OPUS readme: [it-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/it-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/it-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009.it.en | 35.3 | 0.600 |
| newstest2009.it.en | 34.0 | 0.594 |
| Tatoeba.it.en | 70.9 | 0.808 |
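
For quick experiments, the high-level `pipeline` helper can wrap the same checkpoint; a minimal sketch (the input sentence is illustrative, not from the test sets above):

```python
from transformers import pipeline

# The generic "translation" task resolves the language pair from the checkpoint
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-it-en")
result = translator("Il tempo oggi è molto bello.")
print(result[0]["translation_text"])
```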
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-it-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"it",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #it #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-it-en
* source languages: it
* target languages: en
* OPUS readme: it-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 35.3, chr-F: 0.600
testset: URL, BLEU: 34.0, chr-F: 0.594
testset: URL, BLEU: 70.9, chr-F: 0.808
| [
"### opus-mt-it-en\n\n\n* source languages: it\n* target languages: en\n* OPUS readme: it-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.3, chr-F: 0.600\ntestset: URL, BLEU: 34.0, chr-F: 0.594\ntestset: URL, BLEU: 70.9, chr-F: 0.808"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-it-en\n\n\n* source languages: it\n* target languages: en\n* OPUS readme: it-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.3, chr-F: 0.600\ntestset: URL, BLEU: 34.0, chr-F: 0.594\ntestset: URL, BLEU: 70.9, chr-F: 0.808"
] | [
51,
151
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-it-en\n\n\n* source languages: it\n* target languages: en\n* OPUS readme: it-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.3, chr-F: 0.600\ntestset: URL, BLEU: 34.0, chr-F: 0.594\ntestset: URL, BLEU: 70.9, chr-F: 0.808"
] |
translation | transformers |
### ita-epo
* source group: Italian
* target group: Esperanto
* OPUS readme: [ita-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-epo/README.md)
* model: transformer-align
* source language(s): ita
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ita.epo | 28.2 | 0.500 |
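
A minimal batched-inference sketch, assuming the standard Marian API in `transformers`; the two Italian inputs are illustrative placeholders:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-it-eo"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Padding lets sentences of different lengths share one batch
sentences = ["Buongiorno!", "Mi piace imparare le lingue."]
batch = tokenizer(sentences, return_tensors="pt", padding=True)
outputs = model.generate(**batch)
for out in outputs:
    print(tokenizer.decode(out, skip_special_tokens=True))
```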
### System Info:
- hf_name: ita-epo
- source_languages: ita
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'eo']
- src_constituents: {'ita'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-epo/opus-2020-06-16.test.txt
- src_alpha3: ita
- tgt_alpha3: epo
- short_pair: it-eo
- chrF2_score: 0.5
- bleu: 28.2
- brevity_penalty: 0.9570000000000001
- ref_len: 67846.0
- src_name: Italian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: it
- tgt_alpha2: eo
- prefer_old: False
- long_pair: ita-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["it", "eo"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-it-eo | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"it",
"eo",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"it",
"eo"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #it #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### ita-epo
* source group: Italian
* target group: Esperanto
* OPUS readme: ita-epo
* model: transformer-align
* source language(s): ita
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 28.2, chr-F: 0.500
### System Info:
* hf\_name: ita-epo
* source\_languages: ita
* target\_languages: epo
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['it', 'eo']
* src\_constituents: {'ita'}
* tgt\_constituents: {'epo'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ita
* tgt\_alpha3: epo
* short\_pair: it-eo
* chrF2\_score: 0.5
* bleu: 28.2
* brevity\_penalty: 0.9570000000000001
* ref\_len: 67846.0
* src\_name: Italian
* tgt\_name: Esperanto
* train\_date: 2020-06-16
* src\_alpha2: it
* tgt\_alpha2: eo
* prefer\_old: False
* long\_pair: ita-epo
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### ita-epo\n\n\n* source group: Italian\n* target group: Esperanto\n* OPUS readme: ita-epo\n* model: transformer-align\n* source language(s): ita\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.2, chr-F: 0.500",
"### System Info:\n\n\n* hf\\_name: ita-epo\n* source\\_languages: ita\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'eo']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: epo\n* short\\_pair: it-eo\n* chrF2\\_score: 0.5\n* bleu: 28.2\n* brevity\\_penalty: 0.9570000000000001\n* ref\\_len: 67846.0\n* src\\_name: Italian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: it\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: ita-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### ita-epo\n\n\n* source group: Italian\n* target group: Esperanto\n* OPUS readme: ita-epo\n* model: transformer-align\n* source language(s): ita\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.2, chr-F: 0.500",
"### System Info:\n\n\n* hf\\_name: ita-epo\n* source\\_languages: ita\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'eo']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: epo\n* short\\_pair: it-eo\n* chrF2\\_score: 0.5\n* bleu: 28.2\n* brevity\\_penalty: 0.9570000000000001\n* ref\\_len: 67846.0\n* src\\_name: Italian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: it\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: ita-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
138,
412
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ita-epo\n\n\n* source group: Italian\n* target group: Esperanto\n* OPUS readme: ita-epo\n* model: transformer-align\n* source language(s): ita\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.2, chr-F: 0.500### System Info:\n\n\n* hf\\_name: ita-epo\n* source\\_languages: ita\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'eo']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: epo\n* short\\_pair: it-eo\n* chrF2\\_score: 0.5\n* bleu: 28.2\n* brevity\\_penalty: 0.9570000000000001\n* ref\\_len: 67846.0\n* src\\_name: Italian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: it\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: ita-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-it-es
* source languages: it
* target languages: es
* OPUS readme: [it-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/it-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/it-es/opus-2020-01-26.zip)
* test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-es/opus-2020-01-26.test.txt)
* test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-es/opus-2020-01-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.it.es | 61.2 | 0.761 |
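
A one-line usage sketch via the `pipeline` helper (sample sentence is illustrative):

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-it-es")
print(translator("Dove si trova la stazione?")[0]["translation_text"])
```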
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-it-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"it",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #it #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-it-es
* source languages: it
* target languages: es
* OPUS readme: it-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 61.2, chr-F: 0.761
| [
"### opus-mt-it-es\n\n\n* source languages: it\n* target languages: es\n* OPUS readme: it-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 61.2, chr-F: 0.761"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-it-es\n\n\n* source languages: it\n* target languages: es\n* OPUS readme: it-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 61.2, chr-F: 0.761"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-it-es\n\n\n* source languages: it\n* target languages: es\n* OPUS readme: it-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 61.2, chr-F: 0.761"
] |