Dataset schema:

| column | dtype | range |
|------------------|------------------|--------------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
| arxiv | sequencelengths | 0 to 201 |
| languages | sequencelengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | sequencelengths | 0 to 722 |
| processed_texts | sequencelengths | 1 to 723 |
| tokens_length | sequencelengths | 1 to 723 |
| input_texts | sequencelengths | 1 to 1 |
translation | transformers |
### opus-mt-war-es
* source languages: war
* target languages: es
* OPUS readme: [war-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/war-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/war-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.war.es | 28.7 | 0.470 |
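For context, a minimal usage sketch that is not part of the original card: the converted checkpoint on the Hugging Face Hub can be loaded through the `transformers` translation pipeline. This assumes `transformers` and a PyTorch backend are installed, and the Waray input sentence is only illustrative.

```python
# Minimal sketch: run Helsinki-NLP/opus-mt-war-es via the transformers pipeline.
# Assumes transformers + torch are installed; the input sentence is illustrative.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-war-es")

# Translate one Waray sentence into Spanish and print the result.
result = translator("Maupay nga aga ha iyo ngatanan.")
print(result[0]["translation_text"])
```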
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-war-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"war",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #war #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-war-es
* source languages: war
* target languages: es
* OPUS readme: war-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 28.7, chr-F: 0.470
| [
"### opus-mt-war-es\n\n\n* source languages: war\n* target languages: es\n* OPUS readme: war-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.7, chr-F: 0.470"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #war #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-war-es\n\n\n* source languages: war\n* target languages: es\n* OPUS readme: war-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.7, chr-F: 0.470"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #war #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-war-es\n\n\n* source languages: war\n* target languages: es\n* OPUS readme: war-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.7, chr-F: 0.470"
] |
translation | transformers |
### opus-mt-war-fi
* source languages: war
* target languages: fi
* OPUS readme: [war-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/war-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/war-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.war.fi | 26.9 | 0.507 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-war-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"war",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #war #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-war-fi
* source languages: war
* target languages: fi
* OPUS readme: war-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.9, chr-F: 0.507
| [
"### opus-mt-war-fi\n\n\n* source languages: war\n* target languages: fi\n* OPUS readme: war-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.9, chr-F: 0.507"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #war #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-war-fi\n\n\n* source languages: war\n* target languages: fi\n* OPUS readme: war-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.9, chr-F: 0.507"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #war #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-war-fi\n\n\n* source languages: war\n* target languages: fi\n* OPUS readme: war-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.9, chr-F: 0.507"
] |
translation | transformers |
### opus-mt-war-fr
* source languages: war
* target languages: fr
* OPUS readme: [war-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/war-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/war-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.war.fr | 30.2 | 0.482 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-war-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"war",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #war #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-war-fr
* source languages: war
* target languages: fr
* OPUS readme: war-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.2, chr-F: 0.482
| [
"### opus-mt-war-fr\n\n\n* source languages: war\n* target languages: fr\n* OPUS readme: war-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.2, chr-F: 0.482"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #war #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-war-fr\n\n\n* source languages: war\n* target languages: fr\n* OPUS readme: war-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.2, chr-F: 0.482"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #war #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-war-fr\n\n\n* source languages: war\n* target languages: fr\n* OPUS readme: war-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.2, chr-F: 0.482"
] |
translation | transformers |
### opus-mt-war-sv
* source languages: war
* target languages: sv
* OPUS readme: [war-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/war-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/war-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/war-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.war.sv | 31.4 | 0.505 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-war-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"war",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #war #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-war-sv
* source languages: war
* target languages: sv
* OPUS readme: war-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.4, chr-F: 0.505
| [
"### opus-mt-war-sv\n\n\n* source languages: war\n* target languages: sv\n* OPUS readme: war-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.4, chr-F: 0.505"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #war #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-war-sv\n\n\n* source languages: war\n* target languages: sv\n* OPUS readme: war-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.4, chr-F: 0.505"
] | [
51,
105
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #war #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-war-sv\n\n\n* source languages: war\n* target languages: sv\n* OPUS readme: war-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.4, chr-F: 0.505"
] |
translation | transformers |
### opus-mt-wls-en
* source languages: wls
* target languages: en
* OPUS readme: [wls-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wls-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/wls-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.wls.en | 31.8 | 0.471 |
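The "download original weights" link above points to the raw Marian-NMT zip rather than the converted Hub checkpoint. A hedged sketch of fetching and unpacking that archive in Python follows; the URL is taken from the card, while the output directory name is an arbitrary choice and `requests` is assumed to be installed.

```python
# Sketch: download and unpack the original Marian weights zip linked in the card.
# The URL comes from the card; the target directory name is arbitrary.
import io
import zipfile

import requests

url = "https://object.pouta.csc.fi/OPUS-MT-models/wls-en/opus-2020-01-16.zip"
response = requests.get(url, timeout=60)
response.raise_for_status()
zipfile.ZipFile(io.BytesIO(response.content)).extractall("opus-mt-wls-en-original")
```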
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-wls-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"wls",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #wls #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-wls-en
* source languages: wls
* target languages: en
* OPUS readme: wls-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.8, chr-F: 0.471
| [
"### opus-mt-wls-en\n\n\n* source languages: wls\n* target languages: en\n* OPUS readme: wls-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.8, chr-F: 0.471"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #wls #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-wls-en\n\n\n* source languages: wls\n* target languages: en\n* OPUS readme: wls-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.8, chr-F: 0.471"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #wls #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-wls-en\n\n\n* source languages: wls\n* target languages: en\n* OPUS readme: wls-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.8, chr-F: 0.471"
] |
translation | transformers |
### opus-mt-wls-fr
* source languages: wls
* target languages: fr
* OPUS readme: [wls-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wls-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/wls-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.wls.fr | 22.6 | 0.389 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-wls-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"wls",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #wls #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-wls-fr
* source languages: wls
* target languages: fr
* OPUS readme: wls-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.6, chr-F: 0.389
| [
"### opus-mt-wls-fr\n\n\n* source languages: wls\n* target languages: fr\n* OPUS readme: wls-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.6, chr-F: 0.389"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #wls #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-wls-fr\n\n\n* source languages: wls\n* target languages: fr\n* OPUS readme: wls-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.6, chr-F: 0.389"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #wls #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-wls-fr\n\n\n* source languages: wls\n* target languages: fr\n* OPUS readme: wls-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.6, chr-F: 0.389"
] |
translation | transformers |
### opus-mt-wls-sv
* source languages: wls
* target languages: sv
* OPUS readme: [wls-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wls-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/wls-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wls-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.wls.sv | 23.8 | 0.408 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-wls-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"wls",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #wls #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-wls-sv
* source languages: wls
* target languages: sv
* OPUS readme: wls-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.8, chr-F: 0.408
| [
"### opus-mt-wls-sv\n\n\n* source languages: wls\n* target languages: sv\n* OPUS readme: wls-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.8, chr-F: 0.408"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #wls #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-wls-sv\n\n\n* source languages: wls\n* target languages: sv\n* OPUS readme: wls-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.8, chr-F: 0.408"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #wls #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-wls-sv\n\n\n* source languages: wls\n* target languages: sv\n* OPUS readme: wls-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.8, chr-F: 0.408"
] |
translation | transformers |
### opus-mt-xh-en
* source languages: xh
* target languages: en
* OPUS readme: [xh-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/xh-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.xh.en | 45.8 | 0.610 |
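As an alternative to the pipeline helper, the same Hub checkpoint can be driven through the explicit Marian classes. A sketch under the same assumptions (transformers with a PyTorch backend installed; the Xhosa input is a placeholder):

```python
# Sketch: explicit tokenizer/model loading for Helsinki-NLP/opus-mt-xh-en.
# The input sentence is a placeholder; pass a longer list to translate a batch.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-xh-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

inputs = tokenizer(["Molo, unjani namhlanje?"], return_tensors="pt", padding=True)
generated = model.generate(**inputs)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```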
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-xh-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"xh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #xh #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-xh-en
* source languages: xh
* target languages: en
* OPUS readme: xh-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 45.8, chr-F: 0.610
| [
"### opus-mt-xh-en\n\n\n* source languages: xh\n* target languages: en\n* OPUS readme: xh-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.8, chr-F: 0.610"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #xh #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-xh-en\n\n\n* source languages: xh\n* target languages: en\n* OPUS readme: xh-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.8, chr-F: 0.610"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #xh #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-xh-en\n\n\n* source languages: xh\n* target languages: en\n* OPUS readme: xh-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.8, chr-F: 0.610"
] |
translation | transformers |
### opus-mt-xh-es
* source languages: xh
* target languages: es
* OPUS readme: [xh-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/xh-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/xh-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.xh.es | 32.3 | 0.505 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-xh-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"xh",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #xh #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-xh-es
* source languages: xh
* target languages: es
* OPUS readme: xh-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 32.3, chr-F: 0.505
| [
"### opus-mt-xh-es\n\n\n* source languages: xh\n* target languages: es\n* OPUS readme: xh-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.3, chr-F: 0.505"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #xh #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-xh-es\n\n\n* source languages: xh\n* target languages: es\n* OPUS readme: xh-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.3, chr-F: 0.505"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #xh #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-xh-es\n\n\n* source languages: xh\n* target languages: es\n* OPUS readme: xh-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.3, chr-F: 0.505"
] |
translation | transformers |
### opus-mt-xh-fr
* source languages: xh
* target languages: fr
* OPUS readme: [xh-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/xh-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/xh-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.xh.fr | 30.6 | 0.487 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-xh-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"xh",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #xh #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-xh-fr
* source languages: xh
* target languages: fr
* OPUS readme: xh-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.6, chr-F: 0.487
| [
"### opus-mt-xh-fr\n\n\n* source languages: xh\n* target languages: fr\n* OPUS readme: xh-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.6, chr-F: 0.487"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #xh #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-xh-fr\n\n\n* source languages: xh\n* target languages: fr\n* OPUS readme: xh-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.6, chr-F: 0.487"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #xh #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-xh-fr\n\n\n* source languages: xh\n* target languages: fr\n* OPUS readme: xh-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.6, chr-F: 0.487"
] |
translation | transformers |
### opus-mt-xh-sv
* source languages: xh
* target languages: sv
* OPUS readme: [xh-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/xh-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/xh-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/xh-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.xh.sv | 33.1 | 0.522 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-xh-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"xh",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #xh #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-xh-sv
* source languages: xh
* target languages: sv
* OPUS readme: xh-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.1, chr-F: 0.522
| [
"### opus-mt-xh-sv\n\n\n* source languages: xh\n* target languages: sv\n* OPUS readme: xh-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.1, chr-F: 0.522"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #xh #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-xh-sv\n\n\n* source languages: xh\n* target languages: sv\n* OPUS readme: xh-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.1, chr-F: 0.522"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #xh #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-xh-sv\n\n\n* source languages: xh\n* target languages: sv\n* OPUS readme: xh-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.1, chr-F: 0.522"
] |
translation | transformers |
### opus-mt-yap-en
* source languages: yap
* target languages: en
* OPUS readme: [yap-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yap-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yap-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yap.en | 30.2 | 0.452 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-yap-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yap",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #yap #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-yap-en
* source languages: yap
* target languages: en
* OPUS readme: yap-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.2, chr-F: 0.452
| [
"### opus-mt-yap-en\n\n\n* source languages: yap\n* target languages: en\n* OPUS readme: yap-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.2, chr-F: 0.452"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #yap #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-yap-en\n\n\n* source languages: yap\n* target languages: en\n* OPUS readme: yap-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.2, chr-F: 0.452"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #yap #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-yap-en\n\n\n* source languages: yap\n* target languages: en\n* OPUS readme: yap-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.2, chr-F: 0.452"
] |
translation | transformers |
### opus-mt-yap-fr
* source languages: yap
* target languages: fr
* OPUS readme: [yap-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yap-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yap-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yap.fr | 22.2 | 0.381 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-yap-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yap",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #yap #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-yap-fr
* source languages: yap
* target languages: fr
* OPUS readme: yap-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.2, chr-F: 0.381
| [
"### opus-mt-yap-fr\n\n\n* source languages: yap\n* target languages: fr\n* OPUS readme: yap-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.2, chr-F: 0.381"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #yap #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-yap-fr\n\n\n* source languages: yap\n* target languages: fr\n* OPUS readme: yap-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.2, chr-F: 0.381"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #yap #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-yap-fr\n\n\n* source languages: yap\n* target languages: fr\n* OPUS readme: yap-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.2, chr-F: 0.381"
] |
translation | transformers |
### opus-mt-yap-sv
* source languages: yap
* target languages: sv
* OPUS readme: [yap-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yap-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yap-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yap-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yap.sv | 22.6 | 0.399 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-yap-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yap",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #yap #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-yap-sv
* source languages: yap
* target languages: sv
* OPUS readme: yap-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.6, chr-F: 0.399
| [
"### opus-mt-yap-sv\n\n\n* source languages: yap\n* target languages: sv\n* OPUS readme: yap-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.6, chr-F: 0.399"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #yap #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-yap-sv\n\n\n* source languages: yap\n* target languages: sv\n* OPUS readme: yap-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.6, chr-F: 0.399"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #yap #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-yap-sv\n\n\n* source languages: yap\n* target languages: sv\n* OPUS readme: yap-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.6, chr-F: 0.399"
] |
translation | transformers |
### opus-mt-yo-en
* source languages: yo
* target languages: en
* OPUS readme: [yo-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.en | 33.8 | 0.496 |
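The BLEU and chr-F figures in these tables come from the evaluation files linked in each card. As a rough illustration of how such corpus-level scores can be computed from a decoded test set, here is a sketch using `sacrebleu`; the file paths are hypothetical stand-ins with one sentence per line, and chr-F scaling conventions differ between tools, so the printed values are not directly comparable to the card without checking the scale.

```python
# Illustrative scoring sketch with sacrebleu; hypotheses.en / references.en are
# hypothetical files holding system outputs and references, one sentence per line.
import sacrebleu

with open("hypotheses.en", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("references.en", encoding="utf-8") as f:
    references = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
print("BLEU:", bleu.score)
print("chrF:", chrf.score)  # chrF scale conventions vary across tools/versions
```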
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-yo-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yo",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #yo #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-yo-en
* source languages: yo
* target languages: en
* OPUS readme: yo-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.8, chr-F: 0.496
| [
"### opus-mt-yo-en\n\n\n* source languages: yo\n* target languages: en\n* OPUS readme: yo-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.8, chr-F: 0.496"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #yo #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-yo-en\n\n\n* source languages: yo\n* target languages: en\n* OPUS readme: yo-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.8, chr-F: 0.496"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #yo #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-yo-en\n\n\n* source languages: yo\n* target languages: en\n* OPUS readme: yo-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.8, chr-F: 0.496"
] |
translation | transformers |
### opus-mt-yo-es
* source languages: yo
* target languages: es
* OPUS readme: [yo-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.es | 22.0 | 0.393 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-yo-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yo",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #yo #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-yo-es
* source languages: yo
* target languages: es
* OPUS readme: yo-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.0, chr-F: 0.393
| [
"### opus-mt-yo-es\n\n\n* source languages: yo\n* target languages: es\n* OPUS readme: yo-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.0, chr-F: 0.393"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #yo #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-yo-es\n\n\n* source languages: yo\n* target languages: es\n* OPUS readme: yo-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.0, chr-F: 0.393"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #yo #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-yo-es\n\n\n* source languages: yo\n* target languages: es\n* OPUS readme: yo-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.0, chr-F: 0.393"
] |
translation | transformers |
### opus-mt-yo-fi
* source languages: yo
* target languages: fi
* OPUS readme: [yo-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.fi | 21.5 | 0.434 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-yo-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yo",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #yo #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-yo-fi
* source languages: yo
* target languages: fi
* OPUS readme: yo-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.5, chr-F: 0.434
| [
"### opus-mt-yo-fi\n\n\n* source languages: yo\n* target languages: fi\n* OPUS readme: yo-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.5, chr-F: 0.434"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #yo #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-yo-fi\n\n\n* source languages: yo\n* target languages: fi\n* OPUS readme: yo-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.5, chr-F: 0.434"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #yo #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-yo-fi\n\n\n* source languages: yo\n* target languages: fi\n* OPUS readme: yo-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.5, chr-F: 0.434"
] |
translation | transformers |
### opus-mt-yo-fr
* source languages: yo
* target languages: fr
* OPUS readme: [yo-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.fr | 24.1 | 0.408 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-yo-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yo",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #yo #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-yo-fr
* source languages: yo
* target languages: fr
* OPUS readme: yo-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.1, chr-F: 0.408
| [
"### opus-mt-yo-fr\n\n\n* source languages: yo\n* target languages: fr\n* OPUS readme: yo-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.1, chr-F: 0.408"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #yo #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-yo-fr\n\n\n* source languages: yo\n* target languages: fr\n* OPUS readme: yo-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.1, chr-F: 0.408"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #yo #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-yo-fr\n\n\n* source languages: yo\n* target languages: fr\n* OPUS readme: yo-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.1, chr-F: 0.408"
] |
translation | transformers |
### opus-mt-yo-sv
* source languages: yo
* target languages: sv
* OPUS readme: [yo-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.sv | 25.2 | 0.434 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-yo-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"yo",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #yo #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-yo-sv
* source languages: yo
* target languages: sv
* OPUS readme: yo-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.2, chr-F: 0.434
| [
"### opus-mt-yo-sv\n\n\n* source languages: yo\n* target languages: sv\n* OPUS readme: yo-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.2, chr-F: 0.434"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #yo #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-yo-sv\n\n\n* source languages: yo\n* target languages: sv\n* OPUS readme: yo-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.2, chr-F: 0.434"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #yo #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-yo-sv\n\n\n* source languages: yo\n* target languages: sv\n* OPUS readme: yo-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.2, chr-F: 0.434"
] |
translation | transformers |
### opus-mt-zai-es
* source languages: zai
* target languages: es
* OPUS readme: [zai-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zai-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zai-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zai-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zai-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zai.es | 20.8 | 0.372 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zai-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zai",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #zai #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-zai-es
* source languages: zai
* target languages: es
* OPUS readme: zai-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 20.8, chr-F: 0.372
| [
"### opus-mt-zai-es\n\n\n* source languages: zai\n* target languages: es\n* OPUS readme: zai-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.8, chr-F: 0.372"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zai #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-zai-es\n\n\n* source languages: zai\n* target languages: es\n* OPUS readme: zai-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.8, chr-F: 0.372"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zai #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-zai-es\n\n\n* source languages: zai\n* target languages: es\n* OPUS readme: zai-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.8, chr-F: 0.372"
] |
translation | transformers |
### zho-bul
* source group: Chinese
* target group: Bulgarian
* OPUS readme: [zho-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-bul/README.md)
* model: transformer
* source language(s): cmn cmn_Hans cmn_Hant zho zho_Hans zho_Hant
* target language(s): bul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cmn_Hani.bul | 29.6 | 0.497 |
| Tatoeba-test.zho.bul | 29.6 | 0.497 |
### System Info:
- hf_name: zho-bul
- source_languages: zho
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'bg']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-bul/opus-2020-07-03.test.txt
- src_alpha3: zho
- tgt_alpha3: bul
- short_pair: zh-bg
- chrF2_score: 0.49700000000000005
- bleu: 29.6
- brevity_penalty: 0.883
- ref_len: 3113.0
- src_name: Chinese
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: zh
- tgt_alpha2: bg
- prefer_old: False
- long_pair: zho-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "bg"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-bg | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"bg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh",
"bg"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #zh #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zho-bul
* source group: Chinese
* target group: Bulgarian
* OPUS readme: zho-bul
* model: transformer
* source language(s): cmn cmn\_Hans cmn\_Hant zho zho\_Hans zho\_Hant
* target language(s): bul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: Tatoeba-test.cmn\_Hani.bul, BLEU: 29.6, chr-F: 0.497
testset: URL, BLEU: 29.6, chr-F: 0.497
### System Info:
* hf\_name: zho-bul
* source\_languages: zho
* target\_languages: bul
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['zh', 'bg']
* src\_constituents: {'cmn\_Hans', 'nan', 'nan\_Hani', 'gan', 'yue', 'cmn\_Kana', 'yue\_Hani', 'wuu\_Bopo', 'cmn\_Latn', 'yue\_Hira', 'cmn\_Hani', 'cjy\_Hans', 'cmn', 'lzh\_Hang', 'lzh\_Hira', 'cmn\_Hant', 'lzh\_Bopo', 'zho', 'zho\_Hans', 'zho\_Hant', 'lzh\_Hani', 'yue\_Hang', 'wuu', 'yue\_Kana', 'wuu\_Latn', 'yue\_Bopo', 'cjy\_Hant', 'yue\_Hans', 'lzh', 'cmn\_Hira', 'lzh\_Yiii', 'lzh\_Hans', 'cmn\_Bopo', 'cmn\_Hang', 'hak\_Hani', 'cmn\_Yiii', 'yue\_Hant', 'lzh\_Kana', 'wuu\_Hani'}
* tgt\_constituents: {'bul', 'bul\_Latn'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: zho
* tgt\_alpha3: bul
* short\_pair: zh-bg
* chrF2\_score: 0.49700000000000005
* bleu: 29.6
* brevity\_penalty: 0.883
* ref\_len: 3113.0
* src\_name: Chinese
* tgt\_name: Bulgarian
* train\_date: 2020-07-03
* src\_alpha2: zh
* tgt\_alpha2: bg
* prefer\_old: False
* long\_pair: zho-bul
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### zho-bul\n\n\n* source group: Chinese\n* target group: Bulgarian\n* OPUS readme: zho-bul\n* model: transformer\n* source language(s): cmn cmn\\_Hans cmn\\_Hant zho zho\\_Hans zho\\_Hant\n* target language(s): bul\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: Tatoeba-test.cmn\\_Hani.bul, BLEU: 29.6, chr-F: 0.497\ntestset: URL, BLEU: 29.6, chr-F: 0.497",
"### System Info:\n\n\n* hf\\_name: zho-bul\n* source\\_languages: zho\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'bg']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: bul\n* short\\_pair: zh-bg\n* chrF2\\_score: 0.49700000000000005\n* bleu: 29.6\n* brevity\\_penalty: 0.883\n* ref\\_len: 3113.0\n* src\\_name: Chinese\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-07-03\n* src\\_alpha2: zh\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: zho-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zho-bul\n\n\n* source group: Chinese\n* target group: Bulgarian\n* OPUS readme: zho-bul\n* model: transformer\n* source language(s): cmn cmn\\_Hans cmn\\_Hant zho zho\\_Hans zho\\_Hant\n* target language(s): bul\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: Tatoeba-test.cmn\\_Hani.bul, BLEU: 29.6, chr-F: 0.497\ntestset: URL, BLEU: 29.6, chr-F: 0.497",
"### System Info:\n\n\n* hf\\_name: zho-bul\n* source\\_languages: zho\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'bg']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: bul\n* short\\_pair: zh-bg\n* chrF2\\_score: 0.49700000000000005\n* bleu: 29.6\n* brevity\\_penalty: 0.883\n* ref\\_len: 3113.0\n* src\\_name: Chinese\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-07-03\n* src\\_alpha2: zh\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: zho-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
53,
194,
727
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zho-bul\n\n\n* source group: Chinese\n* target group: Bulgarian\n* OPUS readme: zho-bul\n* model: transformer\n* source language(s): cmn cmn\\_Hans cmn\\_Hant zho zho\\_Hans zho\\_Hant\n* target language(s): bul\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: Tatoeba-test.cmn\\_Hani.bul, BLEU: 29.6, chr-F: 0.497\ntestset: URL, BLEU: 29.6, chr-F: 0.497### System Info:\n\n\n* hf\\_name: zho-bul\n* source\\_languages: zho\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'bg']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: bul\n* short\\_pair: zh-bg\n* chrF2\\_score: 0.49700000000000005\n* bleu: 29.6\n* brevity\\_penalty: 0.883\n* ref\\_len: 3113.0\n* src\\_name: Chinese\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-07-03\n* src\\_alpha2: zh\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: zho-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### zho-deu
* source group: Chinese
* target group: German
* OPUS readme: [zho-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-deu/README.md)
* model: transformer-align
* source language(s): cmn cmn_Bopo cmn_Hang cmn_Hani cmn_Hira cmn_Kana cmn_Latn lzh_Hani wuu_Hani yue_Hani
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.deu | 32.1 | 0.522 |
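A minimal usage sketch with the Marian classes from `transformers` (an assumption about how such checkpoints are typically loaded; the Chinese example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zh-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Encode one Chinese sentence, generate the German translation, and decode it
batch = tokenizer(["你好,世界!"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```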
### System Info:
- hf_name: zho-deu
- source_languages: zho
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'de']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: deu
- short_pair: zh-de
- chrF2_score: 0.522
- bleu: 32.1
- brevity_penalty: 0.9540000000000001
- ref_len: 19102.0
- src_name: Chinese
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: de
- prefer_old: False
- long_pair: zho-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "de"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh",
"de"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #zh #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zho-deu
* source group: Chinese
* target group: German
* OPUS readme: zho-deu
* model: transformer-align
* source language(s): cmn cmn\_Bopo cmn\_Hang cmn\_Hani cmn\_Hira cmn\_Kana cmn\_Latn lzh\_Hani wuu\_Hani yue\_Hani
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 32.1, chr-F: 0.522
### System Info:
* hf\_name: zho-deu
* source\_languages: zho
* target\_languages: deu
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['zh', 'de']
* src\_constituents: {'cmn\_Hans', 'nan', 'nan\_Hani', 'gan', 'yue', 'cmn\_Kana', 'yue\_Hani', 'wuu\_Bopo', 'cmn\_Latn', 'yue\_Hira', 'cmn\_Hani', 'cjy\_Hans', 'cmn', 'lzh\_Hang', 'lzh\_Hira', 'cmn\_Hant', 'lzh\_Bopo', 'zho', 'zho\_Hans', 'zho\_Hant', 'lzh\_Hani', 'yue\_Hang', 'wuu', 'yue\_Kana', 'wuu\_Latn', 'yue\_Bopo', 'cjy\_Hant', 'yue\_Hans', 'lzh', 'cmn\_Hira', 'lzh\_Yiii', 'lzh\_Hans', 'cmn\_Bopo', 'cmn\_Hang', 'hak\_Hani', 'cmn\_Yiii', 'yue\_Hant', 'lzh\_Kana', 'wuu\_Hani'}
* tgt\_constituents: {'deu'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: zho
* tgt\_alpha3: deu
* short\_pair: zh-de
* chrF2\_score: 0.522
* bleu: 32.1
* brevity\_penalty: 0.9540000000000001
* ref\_len: 19102.0
* src\_name: Chinese
* tgt\_name: German
* train\_date: 2020-06-17
* src\_alpha2: zh
* tgt\_alpha2: de
* prefer\_old: False
* long\_pair: zho-deu
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### zho-deu\n\n\n* source group: Chinese\n* target group: German\n* OPUS readme: zho-deu\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hang cmn\\_Hani cmn\\_Hira cmn\\_Kana cmn\\_Latn lzh\\_Hani wuu\\_Hani yue\\_Hani\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.1, chr-F: 0.522",
"### System Info:\n\n\n* hf\\_name: zho-deu\n* source\\_languages: zho\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'de']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: deu\n* short\\_pair: zh-de\n* chrF2\\_score: 0.522\n* bleu: 32.1\n* brevity\\_penalty: 0.9540000000000001\n* ref\\_len: 19102.0\n* src\\_name: Chinese\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: zho-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zho-deu\n\n\n* source group: Chinese\n* target group: German\n* OPUS readme: zho-deu\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hang cmn\\_Hani cmn\\_Hira cmn\\_Kana cmn\\_Latn lzh\\_Hani wuu\\_Hani yue\\_Hani\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.1, chr-F: 0.522",
"### System Info:\n\n\n* hf\\_name: zho-deu\n* source\\_languages: zho\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'de']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: deu\n* short\\_pair: zh-de\n* chrF2\\_score: 0.522\n* bleu: 32.1\n* brevity\\_penalty: 0.9540000000000001\n* ref\\_len: 19102.0\n* src\\_name: Chinese\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: zho-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
190,
713
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zho-deu\n\n\n* source group: Chinese\n* target group: German\n* OPUS readme: zho-deu\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hang cmn\\_Hani cmn\\_Hira cmn\\_Kana cmn\\_Latn lzh\\_Hani wuu\\_Hani yue\\_Hani\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.1, chr-F: 0.522### System Info:\n\n\n* hf\\_name: zho-deu\n* source\\_languages: zho\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'de']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: deu\n* short\\_pair: zh-de\n* chrF2\\_score: 0.522\n* bleu: 32.1\n* brevity\\_penalty: 0.9540000000000001\n* ref\\_len: 19102.0\n* src\\_name: Chinese\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: zho-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### zho-eng
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
- **Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation
- **Language(s):**
- Source Language: Chinese
- Target Language: English
- **License:** CC-BY-4.0
- **Resources for more information:**
- [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
## Uses
#### Direct Use
This model can be used for translation and text-to-text generation.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
Further details about the dataset for this model can be found in the OPUS readme: [zho-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-eng/README.md)
## Training
#### System Information
* helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port_machine: brutasse
* port_time: 2020-08-21-14:41
* src_multilingual: False
* tgt_multilingual: False
#### Training Data
##### Preprocessing
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* ref_len: 82826.0
* dataset: [opus](https://github.com/Helsinki-NLP/Opus-MT)
* download original weights: [opus-2020-07-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.zip)
* test set translations: [opus-2020-07-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.test.txt)
## Evaluation
#### Results
* test set scores: [opus-2020-07-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.eval.txt)
* brevity_penalty: 0.948
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.eng | 36.1 | 0.548 |
## Citation Information
```bibtex
@InProceedings{TiedemannThottingal:EAMT2020,
author = {J{\"o}rg Tiedemann and Santhosh Thottingal},
title = {{OPUS-MT} — {B}uilding open translation services for the {W}orld},
booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)},
year = {2020},
address = {Lisbon, Portugal}
}
```
## How to Get Started With the Model
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
```
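
One possible end-to-end translation call built on the same loading code; the Chinese input sentence is only an illustrative placeholder:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zh-en")

# Encode a Chinese sentence, generate the English translation, and decode it
inputs = tokenizer("我喜欢学习外语。", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```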
| {"language": ["zh", "en"], "license": "cc-by-4.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-en | null | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"zh",
"en",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh",
"en"
] | TAGS
#transformers #pytorch #tf #rust #marian #text2text-generation #translation #zh #en #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zho-eng
Table of Contents
-----------------
* Model Details
* Uses
* Risks, Limitations and Biases
* Training
* Evaluation
* Citation Information
* How to Get Started With the Model
Model Details
-------------
* Model Description:
* Developed by: Language Technology Research Group at the University of Helsinki
* Model Type: Translation
* Language(s):
+ Source Language: Chinese
+ Target Language: English
* License: CC-BY-4.0
* Resources for more information:
+ GitHub Repo
Uses
----
#### Direct Use
This model can be used for translation and text-to-text generation.
Risks, Limitations and Biases
-----------------------------
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
Further details about the dataset for this model can be found in the OPUS readme: zho-eng
Training
--------
#### System Information
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
* src\_multilingual: False
* tgt\_multilingual: False
#### Training Data
##### Preprocessing
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* ref\_len: 82826.0
* dataset: opus
* download original weights: URL
* test set translations: URL
Evaluation
----------
#### Results
* test set scores: URL
* brevity\_penalty: 0.948
Benchmarks
----------
testset: URL, BLEU: 36.1, chr-F: 0.548
How to Get Started With the Model
---------------------------------
| [
"### zho-eng\n\n\nTable of Contents\n-----------------\n\n\n* Model Details\n* Uses\n* Risks, Limitations and Biases\n* Training\n* Evaluation\n* Citation Information\n* How to Get Started With the Model\n\n\nModel Details\n-------------\n\n\n* Model Description:\n* Developed by: Language Technology Research Group at the University of Helsinki\n* Model Type: Translation\n* Language(s):\n\t+ Source Language: Chinese\n\t+ Target Language: English\n* License: CC-BY-4.0\n* Resources for more information:\n\t+ GitHub Repo\n\n\nUses\n----",
"#### Direct Use\n\n\nThis model can be used for translation and text-to-text generation.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nFurther details about the dataset for this model can be found in the OPUS readme: zho-eng\n\n\nTraining\n--------",
"#### System Information\n\n\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41\n* src\\_multilingual: False\n* tgt\\_multilingual: False",
"#### Training Data",
"##### Preprocessing\n\n\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* ref\\_len: 82826.0\n* dataset: opus\n* download original weights: URL\n* test set translations: URL\n\n\nEvaluation\n----------",
"#### Results\n\n\n* test set scores: URL\n* brevity\\_penalty: 0.948\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.1, chr-F: 0.548\n\n\nHow to Get Started With the Model\n---------------------------------"
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #zh #en #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zho-eng\n\n\nTable of Contents\n-----------------\n\n\n* Model Details\n* Uses\n* Risks, Limitations and Biases\n* Training\n* Evaluation\n* Citation Information\n* How to Get Started With the Model\n\n\nModel Details\n-------------\n\n\n* Model Description:\n* Developed by: Language Technology Research Group at the University of Helsinki\n* Model Type: Translation\n* Language(s):\n\t+ Source Language: Chinese\n\t+ Target Language: English\n* License: CC-BY-4.0\n* Resources for more information:\n\t+ GitHub Repo\n\n\nUses\n----",
"#### Direct Use\n\n\nThis model can be used for translation and text-to-text generation.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nFurther details about the dataset for this model can be found in the OPUS readme: zho-eng\n\n\nTraining\n--------",
"#### System Information\n\n\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41\n* src\\_multilingual: False\n* tgt\\_multilingual: False",
"#### Training Data",
"##### Preprocessing\n\n\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* ref\\_len: 82826.0\n* dataset: opus\n* download original weights: URL\n* test set translations: URL\n\n\nEvaluation\n----------",
"#### Results\n\n\n* test set scores: URL\n* brevity\\_penalty: 0.948\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.1, chr-F: 0.548\n\n\nHow to Get Started With the Model\n---------------------------------"
] | [
56,
134,
150,
126,
6,
71,
99
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #zh #en #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zho-eng\n\n\nTable of Contents\n-----------------\n\n\n* Model Details\n* Uses\n* Risks, Limitations and Biases\n* Training\n* Evaluation\n* Citation Information\n* How to Get Started With the Model\n\n\nModel Details\n-------------\n\n\n* Model Description:\n* Developed by: Language Technology Research Group at the University of Helsinki\n* Model Type: Translation\n* Language(s):\n\t+ Source Language: Chinese\n\t+ Target Language: English\n* License: CC-BY-4.0\n* Resources for more information:\n\t+ GitHub Repo\n\n\nUses\n----#### Direct Use\n\n\nThis model can be used for translation and text-to-text generation.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\n\n\nFurther details about the dataset for this model can be found in the OPUS readme: zho-eng\n\n\nTraining\n--------#### System Information\n\n\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41\n* src\\_multilingual: False\n* tgt\\_multilingual: False#### Training Data##### Preprocessing\n\n\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* ref\\_len: 82826.0\n* dataset: opus\n* download original weights: URL\n* test set translations: URL\n\n\nEvaluation\n----------#### Results\n\n\n* test set scores: URL\n* brevity\\_penalty: 0.948\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.1, chr-F: 0.548\n\n\nHow to Get Started With the Model\n---------------------------------"
] |
translation | transformers |
### zho-fin
* source group: Chinese
* target group: Finnish
* OPUS readme: [zho-fin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-fin/README.md)
* model: transformer-align
* source language(s): cmn_Bopo cmn_Hani cmn_Latn nan_Hani yue yue_Hani
* target language(s): fin
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.fin | 35.1 | 0.579 |
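A batched-translation sketch (an illustrative assumption about typical usage, not taken from the card); padding aligns the two example sentences so they can be generated and decoded together:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Helsinki-NLP/opus-mt-zh-fi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Translate several Chinese sentences into Finnish in one batch
sentences = ["你好。", "谢谢你的帮助。"]
inputs = tokenizer(sentences, return_tensors="pt", padding=True)
outputs = model.generate(**inputs, num_beams=4)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```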
### System Info:
- hf_name: zho-fin
- source_languages: zho
- target_languages: fin
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-fin/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'fi']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'fin'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-fin/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: fin
- short_pair: zh-fi
- chrF2_score: 0.579
- bleu: 35.1
- brevity_penalty: 0.935
- ref_len: 1847.0
- src_name: Chinese
- tgt_name: Finnish
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: fi
- prefer_old: False
- long_pair: zho-fin
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "fi"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh",
"fi"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #zh #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zho-fin
* source group: Chinese
* target group: Finnish
* OPUS readme: zho-fin
* model: transformer-align
* source language(s): cmn\_Bopo cmn\_Hani cmn\_Latn nan\_Hani yue yue\_Hani
* target language(s): fin
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 35.1, chr-F: 0.579
### System Info:
* hf\_name: zho-fin
* source\_languages: zho
* target\_languages: fin
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['zh', 'fi']
* src\_constituents: {'cmn\_Hans', 'nan', 'nan\_Hani', 'gan', 'yue', 'cmn\_Kana', 'yue\_Hani', 'wuu\_Bopo', 'cmn\_Latn', 'yue\_Hira', 'cmn\_Hani', 'cjy\_Hans', 'cmn', 'lzh\_Hang', 'lzh\_Hira', 'cmn\_Hant', 'lzh\_Bopo', 'zho', 'zho\_Hans', 'zho\_Hant', 'lzh\_Hani', 'yue\_Hang', 'wuu', 'yue\_Kana', 'wuu\_Latn', 'yue\_Bopo', 'cjy\_Hant', 'yue\_Hans', 'lzh', 'cmn\_Hira', 'lzh\_Yiii', 'lzh\_Hans', 'cmn\_Bopo', 'cmn\_Hang', 'hak\_Hani', 'cmn\_Yiii', 'yue\_Hant', 'lzh\_Kana', 'wuu\_Hani'}
* tgt\_constituents: {'fin'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: zho
* tgt\_alpha3: fin
* short\_pair: zh-fi
* chrF2\_score: 0.579
* bleu: 35.1
* brevity\_penalty: 0.935
* ref\_len: 1847.0
* src\_name: Chinese
* tgt\_name: Finnish
* train\_date: 2020-06-17
* src\_alpha2: zh
* tgt\_alpha2: fi
* prefer\_old: False
* long\_pair: zho-fin
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### zho-fin\n\n\n* source group: Chinese\n* target group: Finnish\n* OPUS readme: zho-fin\n* model: transformer-align\n* source language(s): cmn\\_Bopo cmn\\_Hani cmn\\_Latn nan\\_Hani yue yue\\_Hani\n* target language(s): fin\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.1, chr-F: 0.579",
"### System Info:\n\n\n* hf\\_name: zho-fin\n* source\\_languages: zho\n* target\\_languages: fin\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'fi']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'fin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: fin\n* short\\_pair: zh-fi\n* chrF2\\_score: 0.579\n* bleu: 35.1\n* brevity\\_penalty: 0.935\n* ref\\_len: 1847.0\n* src\\_name: Chinese\n* tgt\\_name: Finnish\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: fi\n* prefer\\_old: False\n* long\\_pair: zho-fin\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zho-fin\n\n\n* source group: Chinese\n* target group: Finnish\n* OPUS readme: zho-fin\n* model: transformer-align\n* source language(s): cmn\\_Bopo cmn\\_Hani cmn\\_Latn nan\\_Hani yue yue\\_Hani\n* target language(s): fin\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.1, chr-F: 0.579",
"### System Info:\n\n\n* hf\\_name: zho-fin\n* source\\_languages: zho\n* target\\_languages: fin\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'fi']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'fin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: fin\n* short\\_pair: zh-fi\n* chrF2\\_score: 0.579\n* bleu: 35.1\n* brevity\\_penalty: 0.935\n* ref\\_len: 1847.0\n* src\\_name: Chinese\n* tgt\\_name: Finnish\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: fi\n* prefer\\_old: False\n* long\\_pair: zho-fin\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
162,
701
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zho-fin\n\n\n* source group: Chinese\n* target group: Finnish\n* OPUS readme: zho-fin\n* model: transformer-align\n* source language(s): cmn\\_Bopo cmn\\_Hani cmn\\_Latn nan\\_Hani yue yue\\_Hani\n* target language(s): fin\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.1, chr-F: 0.579### System Info:\n\n\n* hf\\_name: zho-fin\n* source\\_languages: zho\n* target\\_languages: fin\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'fi']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'fin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: fin\n* short\\_pair: zh-fi\n* chrF2\\_score: 0.579\n* bleu: 35.1\n* brevity\\_penalty: 0.935\n* ref\\_len: 1847.0\n* src\\_name: Chinese\n* tgt\\_name: Finnish\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: fi\n* prefer\\_old: False\n* long\\_pair: zho-fin\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### zho-heb
* source group: Chinese
* target group: Hebrew
* OPUS readme: [zho-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-heb/README.md)
* model: transformer-align
* source language(s): cmn cmn_Bopo cmn_Hang cmn_Hani cmn_Hira cmn_Kana cmn_Latn cmn_Yiii lzh lzh_Bopo lzh_Hang lzh_Hani lzh_Hira lzh_Kana lzh_Yiii
* target language(s): heb
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-heb/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-heb/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-heb/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.heb | 28.5 | 0.469 |
### System Info:
- hf_name: zho-heb
- source_languages: zho
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'he']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'heb'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-heb/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-heb/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: heb
- short_pair: zh-he
- chrF2_score: 0.469
- bleu: 28.5
- brevity_penalty: 0.986
- ref_len: 3654.0
- src_name: Chinese
- tgt_name: Hebrew
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: he
- prefer_old: False
- long_pair: zho-heb
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "he"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-he | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh",
"he"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #zh #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zho-heb
* source group: Chinese
* target group: Hebrew
* OPUS readme: zho-heb
* model: transformer-align
* source language(s): cmn cmn\_Bopo cmn\_Hang cmn\_Hani cmn\_Hira cmn\_Kana cmn\_Latn cmn\_Yiii lzh lzh\_Bopo lzh\_Hang lzh\_Hani lzh\_Hira lzh\_Kana lzh\_Yiii
* target language(s): heb
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 28.5, chr-F: 0.469
### System Info:
* hf\_name: zho-heb
* source\_languages: zho
* target\_languages: heb
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['zh', 'he']
* src\_constituents: {'cmn\_Hans', 'nan', 'nan\_Hani', 'gan', 'yue', 'cmn\_Kana', 'yue\_Hani', 'wuu\_Bopo', 'cmn\_Latn', 'yue\_Hira', 'cmn\_Hani', 'cjy\_Hans', 'cmn', 'lzh\_Hang', 'lzh\_Hira', 'cmn\_Hant', 'lzh\_Bopo', 'zho', 'zho\_Hans', 'zho\_Hant', 'lzh\_Hani', 'yue\_Hang', 'wuu', 'yue\_Kana', 'wuu\_Latn', 'yue\_Bopo', 'cjy\_Hant', 'yue\_Hans', 'lzh', 'cmn\_Hira', 'lzh\_Yiii', 'lzh\_Hans', 'cmn\_Bopo', 'cmn\_Hang', 'hak\_Hani', 'cmn\_Yiii', 'yue\_Hant', 'lzh\_Kana', 'wuu\_Hani'}
* tgt\_constituents: {'heb'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: zho
* tgt\_alpha3: heb
* short\_pair: zh-he
* chrF2\_score: 0.469
* bleu: 28.5
* brevity\_penalty: 0.986
* ref\_len: 3654.0
* src\_name: Chinese
* tgt\_name: Hebrew
* train\_date: 2020-06-17
* src\_alpha2: zh
* tgt\_alpha2: he
* prefer\_old: False
* long\_pair: zho-heb
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### zho-heb\n\n\n* source group: Chinese\n* target group: Hebrew\n* OPUS readme: zho-heb\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hang cmn\\_Hani cmn\\_Hira cmn\\_Kana cmn\\_Latn cmn\\_Yiii lzh lzh\\_Bopo lzh\\_Hang lzh\\_Hani lzh\\_Hira lzh\\_Kana lzh\\_Yiii\n* target language(s): heb\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.5, chr-F: 0.469",
"### System Info:\n\n\n* hf\\_name: zho-heb\n* source\\_languages: zho\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'he']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'heb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: heb\n* short\\_pair: zh-he\n* chrF2\\_score: 0.469\n* bleu: 28.5\n* brevity\\_penalty: 0.986\n* ref\\_len: 3654.0\n* src\\_name: Chinese\n* tgt\\_name: Hebrew\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* long\\_pair: zho-heb\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zho-heb\n\n\n* source group: Chinese\n* target group: Hebrew\n* OPUS readme: zho-heb\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hang cmn\\_Hani cmn\\_Hira cmn\\_Kana cmn\\_Latn cmn\\_Yiii lzh lzh\\_Bopo lzh\\_Hang lzh\\_Hani lzh\\_Hira lzh\\_Kana lzh\\_Yiii\n* target language(s): heb\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.5, chr-F: 0.469",
"### System Info:\n\n\n* hf\\_name: zho-heb\n* source\\_languages: zho\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'he']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'heb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: heb\n* short\\_pair: zh-he\n* chrF2\\_score: 0.469\n* bleu: 28.5\n* brevity\\_penalty: 0.986\n* ref\\_len: 3654.0\n* src\\_name: Chinese\n* tgt\\_name: Hebrew\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* long\\_pair: zho-heb\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
216,
707
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zho-heb\n\n\n* source group: Chinese\n* target group: Hebrew\n* OPUS readme: zho-heb\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hang cmn\\_Hani cmn\\_Hira cmn\\_Kana cmn\\_Latn cmn\\_Yiii lzh lzh\\_Bopo lzh\\_Hang lzh\\_Hani lzh\\_Hira lzh\\_Kana lzh\\_Yiii\n* target language(s): heb\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.5, chr-F: 0.469### System Info:\n\n\n* hf\\_name: zho-heb\n* source\\_languages: zho\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'he']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'heb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: heb\n* short\\_pair: zh-he\n* chrF2\\_score: 0.469\n* bleu: 28.5\n* brevity\\_penalty: 0.986\n* ref\\_len: 3654.0\n* src\\_name: Chinese\n* tgt\\_name: Hebrew\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* long\\_pair: zho-heb\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### zho-ita
* source group: Chinese
* target group: Italian
* OPUS readme: [zho-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-ita/README.md)
* model: transformer-align
* source language(s): cmn cmn_Bopo cmn_Hang cmn_Hani cmn_Hira cmn_Kana cmn_Latn lzh lzh_Hang lzh_Hani lzh_Hira lzh_Yiii wuu_Bopo wuu_Hani wuu_Latn yue_Hani
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ita/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ita/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ita/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.ita | 27.9 | 0.508 |
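
Below is a minimal usage sketch, not part of the original card: it assumes the checkpoint is published as `Helsinki-NLP/opus-mt-zh-it` (the id given in this repository's metadata) and that the standard MarianMT classes in `transformers` apply.

```python
# Illustrative sketch; assumes the checkpoint id Helsinki-NLP/opus-mt-zh-it
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zh-it"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["你好,世界。"]  # Chinese input sentences
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print([tokenizer.decode(t, skip_special_tokens=True) for t in generated])
```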
### System Info:
- hf_name: zho-ita
- source_languages: zho
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'it']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ita/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ita/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: ita
- short_pair: zh-it
- chrF2_score: 0.508
- bleu: 27.9
- brevity_penalty: 0.935
- ref_len: 19684.0
- src_name: Chinese
- tgt_name: Italian
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: it
- prefer_old: False
- long_pair: zho-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "it"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-it | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh",
"it"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #zh #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zho-ita
* source group: Chinese
* target group: Italian
* OPUS readme: zho-ita
* model: transformer-align
* source language(s): cmn cmn\_Bopo cmn\_Hang cmn\_Hani cmn\_Hira cmn\_Kana cmn\_Latn lzh lzh\_Hang lzh\_Hani lzh\_Hira lzh\_Yiii wuu\_Bopo wuu\_Hani wuu\_Latn yue\_Hani
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.9, chr-F: 0.508
### System Info:
* hf\_name: zho-ita
* source\_languages: zho
* target\_languages: ita
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['zh', 'it']
* src\_constituents: {'cmn\_Hans', 'nan', 'nan\_Hani', 'gan', 'yue', 'cmn\_Kana', 'yue\_Hani', 'wuu\_Bopo', 'cmn\_Latn', 'yue\_Hira', 'cmn\_Hani', 'cjy\_Hans', 'cmn', 'lzh\_Hang', 'lzh\_Hira', 'cmn\_Hant', 'lzh\_Bopo', 'zho', 'zho\_Hans', 'zho\_Hant', 'lzh\_Hani', 'yue\_Hang', 'wuu', 'yue\_Kana', 'wuu\_Latn', 'yue\_Bopo', 'cjy\_Hant', 'yue\_Hans', 'lzh', 'cmn\_Hira', 'lzh\_Yiii', 'lzh\_Hans', 'cmn\_Bopo', 'cmn\_Hang', 'hak\_Hani', 'cmn\_Yiii', 'yue\_Hant', 'lzh\_Kana', 'wuu\_Hani'}
* tgt\_constituents: {'ita'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: zho
* tgt\_alpha3: ita
* short\_pair: zh-it
* chrF2\_score: 0.508
* bleu: 27.9
* brevity\_penalty: 0.935
* ref\_len: 19684.0
* src\_name: Chinese
* tgt\_name: Italian
* train\_date: 2020-06-17
* src\_alpha2: zh
* tgt\_alpha2: it
* prefer\_old: False
* long\_pair: zho-ita
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### zho-ita\n\n\n* source group: Chinese\n* target group: Italian\n* OPUS readme: zho-ita\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hang cmn\\_Hani cmn\\_Hira cmn\\_Kana cmn\\_Latn lzh lzh\\_Hang lzh\\_Hani lzh\\_Hira lzh\\_Yiii wuu\\_Bopo wuu\\_Hani wuu\\_Latn yue\\_Hani\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.508",
"### System Info:\n\n\n* hf\\_name: zho-ita\n* source\\_languages: zho\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'it']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: ita\n* short\\_pair: zh-it\n* chrF2\\_score: 0.508\n* bleu: 27.9\n* brevity\\_penalty: 0.935\n* ref\\_len: 19684.0\n* src\\_name: Chinese\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: zho-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zho-ita\n\n\n* source group: Chinese\n* target group: Italian\n* OPUS readme: zho-ita\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hang cmn\\_Hani cmn\\_Hira cmn\\_Kana cmn\\_Latn lzh lzh\\_Hang lzh\\_Hani lzh\\_Hira lzh\\_Yiii wuu\\_Bopo wuu\\_Hani wuu\\_Latn yue\\_Hani\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.508",
"### System Info:\n\n\n* hf\\_name: zho-ita\n* source\\_languages: zho\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'it']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: ita\n* short\\_pair: zh-it\n* chrF2\\_score: 0.508\n* bleu: 27.9\n* brevity\\_penalty: 0.935\n* ref\\_len: 19684.0\n* src\\_name: Chinese\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: zho-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
222,
707
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zho-ita\n\n\n* source group: Chinese\n* target group: Italian\n* OPUS readme: zho-ita\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hang cmn\\_Hani cmn\\_Hira cmn\\_Kana cmn\\_Latn lzh lzh\\_Hang lzh\\_Hani lzh\\_Hira lzh\\_Yiii wuu\\_Bopo wuu\\_Hani wuu\\_Latn yue\\_Hani\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.508### System Info:\n\n\n* hf\\_name: zho-ita\n* source\\_languages: zho\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'it']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: ita\n* short\\_pair: zh-it\n* chrF2\\_score: 0.508\n* bleu: 27.9\n* brevity\\_penalty: 0.935\n* ref\\_len: 19684.0\n* src\\_name: Chinese\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: zho-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### zho-msa
* source group: Chinese
* target group: Malay (macrolanguage)
* OPUS readme: [zho-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-msa/README.md)
* model: transformer-align
* source language(s): cmn_Bopo cmn_Hani cmn_Latn hak_Hani yue_Bopo yue_Hani
* target language(s): ind zsm_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-msa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-msa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-msa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.msa | 13.9 | 0.390 |
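
Since the card above notes that a sentence-initial target-language token of the form `>>id<<` is required, a hedged sketch (assuming the checkpoint id `Helsinki-NLP/opus-mt-zh-ms` from this repository's metadata, and that the listed target IDs `ind` and `zsm_Latn` are accepted) would prepend that token before tokenizing:

```python
# Illustrative sketch; >>zsm_Latn<< and >>ind<< are the target-language IDs listed above
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zh-ms"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = [">>zsm_Latn<< 你好,世界。"]  # target-language token goes first
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print([tokenizer.decode(t, skip_special_tokens=True) for t in generated])
```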
### System Info:
- hf_name: zho-msa
- source_languages: zho
- target_languages: msa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-msa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'ms']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-msa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-msa/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: msa
- short_pair: zh-ms
- chrF2_score: 0.39
- bleu: 13.9
- brevity_penalty: 0.9229999999999999
- ref_len: 2762.0
- src_name: Chinese
- tgt_name: Malay (macrolanguage)
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: ms
- prefer_old: False
- long_pair: zho-msa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "ms"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-ms | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"ms",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh",
"ms"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #zh #ms #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zho-msa
* source group: Chinese
* target group: Malay (macrolanguage)
* OPUS readme: zho-msa
* model: transformer-align
* source language(s): cmn\_Bopo cmn\_Hani cmn\_Latn hak\_Hani yue\_Bopo yue\_Hani
* target language(s): ind zsm\_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 13.9, chr-F: 0.390
### System Info:
* hf\_name: zho-msa
* source\_languages: zho
* target\_languages: msa
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['zh', 'ms']
* src\_constituents: {'cmn\_Hans', 'nan', 'nan\_Hani', 'gan', 'yue', 'cmn\_Kana', 'yue\_Hani', 'wuu\_Bopo', 'cmn\_Latn', 'yue\_Hira', 'cmn\_Hani', 'cjy\_Hans', 'cmn', 'lzh\_Hang', 'lzh\_Hira', 'cmn\_Hant', 'lzh\_Bopo', 'zho', 'zho\_Hans', 'zho\_Hant', 'lzh\_Hani', 'yue\_Hang', 'wuu', 'yue\_Kana', 'wuu\_Latn', 'yue\_Bopo', 'cjy\_Hant', 'yue\_Hans', 'lzh', 'cmn\_Hira', 'lzh\_Yiii', 'lzh\_Hans', 'cmn\_Bopo', 'cmn\_Hang', 'hak\_Hani', 'cmn\_Yiii', 'yue\_Hant', 'lzh\_Kana', 'wuu\_Hani'}
* tgt\_constituents: {'zsm\_Latn', 'ind', 'max\_Latn', 'zlm\_Latn', 'min'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: zho
* tgt\_alpha3: msa
* short\_pair: zh-ms
* chrF2\_score: 0.39
* bleu: 13.9
* brevity\_penalty: 0.9229999999999999
* ref\_len: 2762.0
* src\_name: Chinese
* tgt\_name: Malay (macrolanguage)
* train\_date: 2020-06-17
* src\_alpha2: zh
* tgt\_alpha2: ms
* prefer\_old: False
* long\_pair: zho-msa
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### zho-msa\n\n\n* source group: Chinese\n* target group: Malay (macrolanguage)\n* OPUS readme: zho-msa\n* model: transformer-align\n* source language(s): cmn\\_Bopo cmn\\_Hani cmn\\_Latn hak\\_Hani yue\\_Bopo yue\\_Hani\n* target language(s): ind zsm\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 13.9, chr-F: 0.390",
"### System Info:\n\n\n* hf\\_name: zho-msa\n* source\\_languages: zho\n* target\\_languages: msa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'ms']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: msa\n* short\\_pair: zh-ms\n* chrF2\\_score: 0.39\n* bleu: 13.9\n* brevity\\_penalty: 0.9229999999999999\n* ref\\_len: 2762.0\n* src\\_name: Chinese\n* tgt\\_name: Malay (macrolanguage)\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: ms\n* prefer\\_old: False\n* long\\_pair: zho-msa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #ms #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zho-msa\n\n\n* source group: Chinese\n* target group: Malay (macrolanguage)\n* OPUS readme: zho-msa\n* model: transformer-align\n* source language(s): cmn\\_Bopo cmn\\_Hani cmn\\_Latn hak\\_Hani yue\\_Bopo yue\\_Hani\n* target language(s): ind zsm\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 13.9, chr-F: 0.390",
"### System Info:\n\n\n* hf\\_name: zho-msa\n* source\\_languages: zho\n* target\\_languages: msa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'ms']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: msa\n* short\\_pair: zh-ms\n* chrF2\\_score: 0.39\n* bleu: 13.9\n* brevity\\_penalty: 0.9229999999999999\n* ref\\_len: 2762.0\n* src\\_name: Chinese\n* tgt\\_name: Malay (macrolanguage)\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: ms\n* prefer\\_old: False\n* long\\_pair: zho-msa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
208,
756
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #ms #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zho-msa\n\n\n* source group: Chinese\n* target group: Malay (macrolanguage)\n* OPUS readme: zho-msa\n* model: transformer-align\n* source language(s): cmn\\_Bopo cmn\\_Hani cmn\\_Latn hak\\_Hani yue\\_Bopo yue\\_Hani\n* target language(s): ind zsm\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 13.9, chr-F: 0.390### System Info:\n\n\n* hf\\_name: zho-msa\n* source\\_languages: zho\n* target\\_languages: msa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'ms']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: msa\n* short\\_pair: zh-ms\n* chrF2\\_score: 0.39\n* bleu: 13.9\n* brevity\\_penalty: 0.9229999999999999\n* ref\\_len: 2762.0\n* src\\_name: Chinese\n* tgt\\_name: Malay (macrolanguage)\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: ms\n* prefer\\_old: False\n* long\\_pair: zho-msa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### zho-nld
* source group: Chinese
* target group: Dutch
* OPUS readme: [zho-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-nld/README.md)
* model: transformer-align
* source language(s): cmn cmn_Bopo cmn_Hani cmn_Hira cmn_Kana cmn_Latn
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.nld | 31.5 | 0.525 |
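
As an illustration only (assuming the checkpoint id `Helsinki-NLP/opus-mt-zh-nl` from this repository's metadata), the model can typically also be driven through the high-level `pipeline` API:

```python
# Illustrative sketch; assumes the checkpoint id Helsinki-NLP/opus-mt-zh-nl
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-nl")
print(translator("我今天很忙。", max_length=128))
```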
### System Info:
- hf_name: zho-nld
- source_languages: zho
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'nl']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-nld/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: nld
- short_pair: zh-nl
- chrF2_score: 0.525
- bleu: 31.5
- brevity_penalty: 0.9309999999999999
- ref_len: 13575.0
- src_name: Chinese
- tgt_name: Dutch
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: nl
- prefer_old: False
- long_pair: zho-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "nl"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-nl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"nl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh",
"nl"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #zh #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zho-nld
* source group: Chinese
* target group: Dutch
* OPUS readme: zho-nld
* model: transformer-align
* source language(s): cmn cmn\_Bopo cmn\_Hani cmn\_Hira cmn\_Kana cmn\_Latn
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.5, chr-F: 0.525
### System Info:
* hf\_name: zho-nld
* source\_languages: zho
* target\_languages: nld
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['zh', 'nl']
* src\_constituents: {'cmn\_Hans', 'nan', 'nan\_Hani', 'gan', 'yue', 'cmn\_Kana', 'yue\_Hani', 'wuu\_Bopo', 'cmn\_Latn', 'yue\_Hira', 'cmn\_Hani', 'cjy\_Hans', 'cmn', 'lzh\_Hang', 'lzh\_Hira', 'cmn\_Hant', 'lzh\_Bopo', 'zho', 'zho\_Hans', 'zho\_Hant', 'lzh\_Hani', 'yue\_Hang', 'wuu', 'yue\_Kana', 'wuu\_Latn', 'yue\_Bopo', 'cjy\_Hant', 'yue\_Hans', 'lzh', 'cmn\_Hira', 'lzh\_Yiii', 'lzh\_Hans', 'cmn\_Bopo', 'cmn\_Hang', 'hak\_Hani', 'cmn\_Yiii', 'yue\_Hant', 'lzh\_Kana', 'wuu\_Hani'}
* tgt\_constituents: {'nld'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: zho
* tgt\_alpha3: nld
* short\_pair: zh-nl
* chrF2\_score: 0.525
* bleu: 31.5
* brevity\_penalty: 0.9309999999999999
* ref\_len: 13575.0
* src\_name: Chinese
* tgt\_name: Dutch
* train\_date: 2020-06-17
* src\_alpha2: zh
* tgt\_alpha2: nl
* prefer\_old: False
* long\_pair: zho-nld
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### zho-nld\n\n\n* source group: Chinese\n* target group: Dutch\n* OPUS readme: zho-nld\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hani cmn\\_Hira cmn\\_Kana cmn\\_Latn\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.5, chr-F: 0.525",
"### System Info:\n\n\n* hf\\_name: zho-nld\n* source\\_languages: zho\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'nl']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: nld\n* short\\_pair: zh-nl\n* chrF2\\_score: 0.525\n* bleu: 31.5\n* brevity\\_penalty: 0.9309999999999999\n* ref\\_len: 13575.0\n* src\\_name: Chinese\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: zho-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zho-nld\n\n\n* source group: Chinese\n* target group: Dutch\n* OPUS readme: zho-nld\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hani cmn\\_Hira cmn\\_Kana cmn\\_Latn\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.5, chr-F: 0.525",
"### System Info:\n\n\n* hf\\_name: zho-nld\n* source\\_languages: zho\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'nl']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: nld\n* short\\_pair: zh-nl\n* chrF2\\_score: 0.525\n* bleu: 31.5\n* brevity\\_penalty: 0.9309999999999999\n* ref\\_len: 13575.0\n* src\\_name: Chinese\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: zho-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
167,
718
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zho-nld\n\n\n* source group: Chinese\n* target group: Dutch\n* OPUS readme: zho-nld\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hani cmn\\_Hira cmn\\_Kana cmn\\_Latn\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.5, chr-F: 0.525### System Info:\n\n\n* hf\\_name: zho-nld\n* source\\_languages: zho\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'nl']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: nld\n* short\\_pair: zh-nl\n* chrF2\\_score: 0.525\n* bleu: 31.5\n* brevity\\_penalty: 0.9309999999999999\n* ref\\_len: 13575.0\n* src\\_name: Chinese\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: zho-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### zho-swe
* source group: Chinese
* target group: Swedish
* OPUS readme: [zho-swe](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-swe/README.md)
* model: transformer-align
* source language(s): cmn cmn_Bopo cmn_Hani cmn_Latn
* target language(s): swe
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-swe/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-swe/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-swe/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.swe | 46.1 | 0.621 |
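
A batched usage sketch, not part of the original card, assuming the checkpoint id `Helsinki-NLP/opus-mt-zh-sv` from this repository's metadata and the standard MarianMT interface:

```python
# Illustrative sketch; assumes the checkpoint id Helsinki-NLP/opus-mt-zh-sv
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zh-sv"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["我喜欢读书。", "今天天气很好。"]  # a small batch of Chinese sentences
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch, num_beams=4)  # beam search is optional
print([tokenizer.decode(t, skip_special_tokens=True) for t in generated])
```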
### System Info:
- hf_name: zho-swe
- source_languages: zho
- target_languages: swe
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-swe/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'sv']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'swe'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-swe/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-swe/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: swe
- short_pair: zh-sv
- chrF2_score: 0.621
- bleu: 46.1
- brevity_penalty: 0.956
- ref_len: 6223.0
- src_name: Chinese
- tgt_name: Swedish
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: sv
- prefer_old: False
- long_pair: zho-swe
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "sv"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh",
"sv"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #zh #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zho-swe
* source group: Chinese
* target group: Swedish
* OPUS readme: zho-swe
* model: transformer-align
* source language(s): cmn cmn\_Bopo cmn\_Hani cmn\_Latn
* target language(s): swe
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 46.1, chr-F: 0.621
### System Info:
* hf\_name: zho-swe
* source\_languages: zho
* target\_languages: swe
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['zh', 'sv']
* src\_constituents: {'cmn\_Hans', 'nan', 'nan\_Hani', 'gan', 'yue', 'cmn\_Kana', 'yue\_Hani', 'wuu\_Bopo', 'cmn\_Latn', 'yue\_Hira', 'cmn\_Hani', 'cjy\_Hans', 'cmn', 'lzh\_Hang', 'lzh\_Hira', 'cmn\_Hant', 'lzh\_Bopo', 'zho', 'zho\_Hans', 'zho\_Hant', 'lzh\_Hani', 'yue\_Hang', 'wuu', 'yue\_Kana', 'wuu\_Latn', 'yue\_Bopo', 'cjy\_Hant', 'yue\_Hans', 'lzh', 'cmn\_Hira', 'lzh\_Yiii', 'lzh\_Hans', 'cmn\_Bopo', 'cmn\_Hang', 'hak\_Hani', 'cmn\_Yiii', 'yue\_Hant', 'lzh\_Kana', 'wuu\_Hani'}
* tgt\_constituents: {'swe'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: zho
* tgt\_alpha3: swe
* short\_pair: zh-sv
* chrF2\_score: 0.621
* bleu: 46.1
* brevity\_penalty: 0.956
* ref\_len: 6223.0
* src\_name: Chinese
* tgt\_name: Swedish
* train\_date: 2020-06-17
* src\_alpha2: zh
* tgt\_alpha2: sv
* prefer\_old: False
* long\_pair: zho-swe
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### zho-swe\n\n\n* source group: Chinese\n* target group: Swedish\n* OPUS readme: zho-swe\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hani cmn\\_Latn\n* target language(s): swe\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.1, chr-F: 0.621",
"### System Info:\n\n\n* hf\\_name: zho-swe\n* source\\_languages: zho\n* target\\_languages: swe\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'sv']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'swe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: swe\n* short\\_pair: zh-sv\n* chrF2\\_score: 0.621\n* bleu: 46.1\n* brevity\\_penalty: 0.956\n* ref\\_len: 6223.0\n* src\\_name: Chinese\n* tgt\\_name: Swedish\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: sv\n* prefer\\_old: False\n* long\\_pair: zho-swe\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zho-swe\n\n\n* source group: Chinese\n* target group: Swedish\n* OPUS readme: zho-swe\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hani cmn\\_Latn\n* target language(s): swe\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.1, chr-F: 0.621",
"### System Info:\n\n\n* hf\\_name: zho-swe\n* source\\_languages: zho\n* target\\_languages: swe\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'sv']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'swe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: swe\n* short\\_pair: zh-sv\n* chrF2\\_score: 0.621\n* bleu: 46.1\n* brevity\\_penalty: 0.956\n* ref\\_len: 6223.0\n* src\\_name: Chinese\n* tgt\\_name: Swedish\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: sv\n* prefer\\_old: False\n* long\\_pair: zho-swe\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
156,
707
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zho-swe\n\n\n* source group: Chinese\n* target group: Swedish\n* OPUS readme: zho-swe\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hani cmn\\_Latn\n* target language(s): swe\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.1, chr-F: 0.621### System Info:\n\n\n* hf\\_name: zho-swe\n* source\\_languages: zho\n* target\\_languages: swe\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'sv']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'swe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: swe\n* short\\_pair: zh-sv\n* chrF2\\_score: 0.621\n* bleu: 46.1\n* brevity\\_penalty: 0.956\n* ref\\_len: 6223.0\n* src\\_name: Chinese\n* tgt\\_name: Swedish\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: sv\n* prefer\\_old: False\n* long\\_pair: zho-swe\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### zho-ukr
* source group: Chinese
* target group: Ukrainian
* OPUS readme: [zho-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-ukr/README.md)
* model: transformer-align
* source language(s): cmn cmn_Bopo cmn_Hang cmn_Hani cmn_Kana cmn_Latn cmn_Yiii yue_Bopo yue_Hang yue_Hani yue_Hira yue_Kana
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ukr/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ukr/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ukr/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.ukr | 10.4 | 0.259 |
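
An illustrative sketch only, assuming the checkpoint id `Helsinki-NLP/opus-mt-zh-uk` from this repository's metadata; the smaller target vocabulary noted above (spm4k) does not change how the model is called:

```python
# Illustrative sketch; assumes the checkpoint id Helsinki-NLP/opus-mt-zh-uk
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zh-uk"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["你叫什么名字?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```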
### System Info:
- hf_name: zho-ukr
- source_languages: zho
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'uk']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ukr/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-ukr/opus-2020-06-16.test.txt
- src_alpha3: zho
- tgt_alpha3: ukr
- short_pair: zh-uk
- chrF2_score: 0.259
- bleu: 10.4
- brevity_penalty: 0.9059999999999999
- ref_len: 9193.0
- src_name: Chinese
- tgt_name: Ukrainian
- train_date: 2020-06-16
- src_alpha2: zh
- tgt_alpha2: uk
- prefer_old: False
- long_pair: zho-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "uk"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-uk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh",
"uk"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #zh #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zho-ukr
* source group: Chinese
* target group: Ukrainian
* OPUS readme: zho-ukr
* model: transformer-align
* source language(s): cmn cmn\_Bopo cmn\_Hang cmn\_Hani cmn\_Kana cmn\_Latn cmn\_Yiii yue\_Bopo yue\_Hang yue\_Hani yue\_Hira yue\_Kana
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 10.4, chr-F: 0.259
### System Info:
* hf\_name: zho-ukr
* source\_languages: zho
* target\_languages: ukr
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['zh', 'uk']
* src\_constituents: {'cmn\_Hans', 'nan', 'nan\_Hani', 'gan', 'yue', 'cmn\_Kana', 'yue\_Hani', 'wuu\_Bopo', 'cmn\_Latn', 'yue\_Hira', 'cmn\_Hani', 'cjy\_Hans', 'cmn', 'lzh\_Hang', 'lzh\_Hira', 'cmn\_Hant', 'lzh\_Bopo', 'zho', 'zho\_Hans', 'zho\_Hant', 'lzh\_Hani', 'yue\_Hang', 'wuu', 'yue\_Kana', 'wuu\_Latn', 'yue\_Bopo', 'cjy\_Hant', 'yue\_Hans', 'lzh', 'cmn\_Hira', 'lzh\_Yiii', 'lzh\_Hans', 'cmn\_Bopo', 'cmn\_Hang', 'hak\_Hani', 'cmn\_Yiii', 'yue\_Hant', 'lzh\_Kana', 'wuu\_Hani'}
* tgt\_constituents: {'ukr'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: zho
* tgt\_alpha3: ukr
* short\_pair: zh-uk
* chrF2\_score: 0.259
* bleu: 10.4
* brevity\_penalty: 0.9059999999999999
* ref\_len: 9193.0
* src\_name: Chinese
* tgt\_name: Ukrainian
* train\_date: 2020-06-16
* src\_alpha2: zh
* tgt\_alpha2: uk
* prefer\_old: False
* long\_pair: zho-ukr
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### zho-ukr\n\n\n* source group: Chinese\n* target group: Ukrainian\n* OPUS readme: zho-ukr\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hang cmn\\_Hani cmn\\_Kana cmn\\_Latn cmn\\_Yiii yue\\_Bopo yue\\_Hang yue\\_Hani yue\\_Hira yue\\_Kana\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 10.4, chr-F: 0.259",
"### System Info:\n\n\n* hf\\_name: zho-ukr\n* source\\_languages: zho\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'uk']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: ukr\n* short\\_pair: zh-uk\n* chrF2\\_score: 0.259\n* bleu: 10.4\n* brevity\\_penalty: 0.9059999999999999\n* ref\\_len: 9193.0\n* src\\_name: Chinese\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-16\n* src\\_alpha2: zh\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: zho-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zho-ukr\n\n\n* source group: Chinese\n* target group: Ukrainian\n* OPUS readme: zho-ukr\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hang cmn\\_Hani cmn\\_Kana cmn\\_Latn cmn\\_Yiii yue\\_Bopo yue\\_Hang yue\\_Hani yue\\_Hira yue\\_Kana\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 10.4, chr-F: 0.259",
"### System Info:\n\n\n* hf\\_name: zho-ukr\n* source\\_languages: zho\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'uk']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: ukr\n* short\\_pair: zh-uk\n* chrF2\\_score: 0.259\n* bleu: 10.4\n* brevity\\_penalty: 0.9059999999999999\n* ref\\_len: 9193.0\n* src\\_name: Chinese\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-16\n* src\\_alpha2: zh\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: zho-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
196,
719
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zho-ukr\n\n\n* source group: Chinese\n* target group: Ukrainian\n* OPUS readme: zho-ukr\n* model: transformer-align\n* source language(s): cmn cmn\\_Bopo cmn\\_Hang cmn\\_Hani cmn\\_Kana cmn\\_Latn cmn\\_Yiii yue\\_Bopo yue\\_Hang yue\\_Hani yue\\_Hira yue\\_Kana\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 10.4, chr-F: 0.259### System Info:\n\n\n* hf\\_name: zho-ukr\n* source\\_languages: zho\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'uk']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: ukr\n* short\\_pair: zh-uk\n* chrF2\\_score: 0.259\n* bleu: 10.4\n* brevity\\_penalty: 0.9059999999999999\n* ref\\_len: 9193.0\n* src\\_name: Chinese\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-16\n* src\\_alpha2: zh\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: zho-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### zho-vie
* source group: Chinese
* target group: Vietnamese
* OPUS readme: [zho-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-vie/README.md)
* model: transformer-align
* source language(s): cmn_Hani cmn_Latn
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.vie | 20.0 | 0.385 |
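
The snippet below is a minimal usage sketch rather than part of the original card: it loads the checkpoint through the MarianMT classes in `transformers`, using the `Helsinki-NLP/opus-mt-zh-vi` model id listed in the system info, with an arbitrary placeholder sentence as input.

```python
# Minimal sketch: Chinese -> Vietnamese with this checkpoint (illustrative input only).
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zh-vi"  # id taken from the system info below
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_texts = ["你好,世界。"]  # placeholder input; any Chinese sentence works
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```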
### System Info:
- hf_name: zho-vie
- source_languages: zho
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'vi']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: vie
- short_pair: zh-vi
- chrF2_score: 0.385
- bleu: 20.0
- brevity_penalty: 0.917
- ref_len: 4667.0
- src_name: Chinese
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: vi
- prefer_old: False
- long_pair: zho-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["zh", "vi"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zh-vi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zh",
"vi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh",
"vi"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #zh #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zho-vie
* source group: Chinese
* target group: Vietnamese
* OPUS readme: zho-vie
* model: transformer-align
* source language(s): cmn\_Hani cmn\_Latn
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 20.0, chr-F: 0.385
### System Info:
* hf\_name: zho-vie
* source\_languages: zho
* target\_languages: vie
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['zh', 'vi']
* src\_constituents: {'cmn\_Hans', 'nan', 'nan\_Hani', 'gan', 'yue', 'cmn\_Kana', 'yue\_Hani', 'wuu\_Bopo', 'cmn\_Latn', 'yue\_Hira', 'cmn\_Hani', 'cjy\_Hans', 'cmn', 'lzh\_Hang', 'lzh\_Hira', 'cmn\_Hant', 'lzh\_Bopo', 'zho', 'zho\_Hans', 'zho\_Hant', 'lzh\_Hani', 'yue\_Hang', 'wuu', 'yue\_Kana', 'wuu\_Latn', 'yue\_Bopo', 'cjy\_Hant', 'yue\_Hans', 'lzh', 'cmn\_Hira', 'lzh\_Yiii', 'lzh\_Hans', 'cmn\_Bopo', 'cmn\_Hang', 'hak\_Hani', 'cmn\_Yiii', 'yue\_Hant', 'lzh\_Kana', 'wuu\_Hani'}
* tgt\_constituents: {'vie', 'vie\_Hani'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: zho
* tgt\_alpha3: vie
* short\_pair: zh-vi
* chrF2\_score: 0.385
* bleu: 20.0
* brevity\_penalty: 0.917
* ref\_len: 4667.0
* src\_name: Chinese
* tgt\_name: Vietnamese
* train\_date: 2020-06-17
* src\_alpha2: zh
* tgt\_alpha2: vi
* prefer\_old: False
* long\_pair: zho-vie
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### zho-vie\n\n\n* source group: Chinese\n* target group: Vietnamese\n* OPUS readme: zho-vie\n* model: transformer-align\n* source language(s): cmn\\_Hani cmn\\_Latn\n* target language(s): vie\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.0, chr-F: 0.385",
"### System Info:\n\n\n* hf\\_name: zho-vie\n* source\\_languages: zho\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'vi']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: vie\n* short\\_pair: zh-vi\n* chrF2\\_score: 0.385\n* bleu: 20.0\n* brevity\\_penalty: 0.917\n* ref\\_len: 4667.0\n* src\\_name: Chinese\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: zho-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zho-vie\n\n\n* source group: Chinese\n* target group: Vietnamese\n* OPUS readme: zho-vie\n* model: transformer-align\n* source language(s): cmn\\_Hani cmn\\_Latn\n* target language(s): vie\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.0, chr-F: 0.385",
"### System Info:\n\n\n* hf\\_name: zho-vie\n* source\\_languages: zho\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'vi']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: vie\n* short\\_pair: zh-vi\n* chrF2\\_score: 0.385\n* bleu: 20.0\n* brevity\\_penalty: 0.917\n* ref\\_len: 4667.0\n* src\\_name: Chinese\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: zho-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
144,
710
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zh #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zho-vie\n\n\n* source group: Chinese\n* target group: Vietnamese\n* OPUS readme: zho-vie\n* model: transformer-align\n* source language(s): cmn\\_Hani cmn\\_Latn\n* target language(s): vie\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.0, chr-F: 0.385### System Info:\n\n\n* hf\\_name: zho-vie\n* source\\_languages: zho\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['zh', 'vi']\n* src\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zho\n* tgt\\_alpha3: vie\n* short\\_pair: zh-vi\n* chrF2\\_score: 0.385\n* bleu: 20.0\n* brevity\\_penalty: 0.917\n* ref\\_len: 4667.0\n* src\\_name: Chinese\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: zh\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: zho-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### zle-eng
* source group: East Slavic languages
* target group: English
* OPUS readme: [zle-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-eng/README.md)
* model: transformer
* source language(s): bel bel_Latn orv_Cyrl rue rus ukr
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012-ruseng.rus.eng | 31.1 | 0.579 |
| newstest2013-ruseng.rus.eng | 24.9 | 0.522 |
| newstest2014-ruen-ruseng.rus.eng | 27.9 | 0.563 |
| newstest2015-enru-ruseng.rus.eng | 26.8 | 0.541 |
| newstest2016-enru-ruseng.rus.eng | 25.8 | 0.535 |
| newstest2017-enru-ruseng.rus.eng | 29.1 | 0.561 |
| newstest2018-enru-ruseng.rus.eng | 25.4 | 0.537 |
| newstest2019-ruen-ruseng.rus.eng | 26.8 | 0.545 |
| Tatoeba-test.bel-eng.bel.eng | 38.3 | 0.569 |
| Tatoeba-test.multi.eng | 50.1 | 0.656 |
| Tatoeba-test.orv-eng.orv.eng | 6.9 | 0.217 |
| Tatoeba-test.rue-eng.rue.eng | 15.4 | 0.345 |
| Tatoeba-test.rus-eng.rus.eng | 52.5 | 0.674 |
| Tatoeba-test.ukr-eng.ukr.eng | 52.1 | 0.673 |
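
A minimal usage sketch (not part of the original evaluation), assuming the `Helsinki-NLP/opus-mt-zle-en` id from the system info: because the target side is English only, no target-language token is needed and sentences from different source languages can share one batch. The example sentences are placeholders.

```python
# Minimal sketch: East Slavic -> English via the transformers translation pipeline.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zle-en")

sentences = [
    "Я люблю читать книги.",   # Russian placeholder
    "Я люблю читати книжки.",  # Ukrainian placeholder
    "Я люблю чытаць кнігі.",   # Belarusian placeholder
]
for result in translator(sentences):
    print(result["translation_text"])
```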
### System Info:
- hf_name: zle-eng
- source_languages: zle
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['be', 'ru', 'uk', 'zle', 'en']
- src_constituents: {'bel', 'orv_Cyrl', 'bel_Latn', 'rus', 'ukr', 'rue'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opus2m-2020-08-01.test.txt
- src_alpha3: zle
- tgt_alpha3: eng
- short_pair: zle-en
- chrF2_score: 0.6559999999999999
- bleu: 50.1
- brevity_penalty: 0.97
- ref_len: 69599.0
- src_name: East Slavic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: zle
- tgt_alpha2: en
- prefer_old: False
- long_pair: zle-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["be", "ru", "uk", "zle", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zle-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"be",
"ru",
"uk",
"zle",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"be",
"ru",
"uk",
"zle",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #be #ru #uk #zle #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zle-eng
* source group: East Slavic languages
* target group: English
* OPUS readme: zle-eng
* model: transformer
* source language(s): bel bel\_Latn orv\_Cyrl rue rus ukr
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.1, chr-F: 0.579
testset: URL, BLEU: 24.9, chr-F: 0.522
testset: URL, BLEU: 27.9, chr-F: 0.563
testset: URL, BLEU: 26.8, chr-F: 0.541
testset: URL, BLEU: 25.8, chr-F: 0.535
testset: URL, BLEU: 29.1, chr-F: 0.561
testset: URL, BLEU: 25.4, chr-F: 0.537
testset: URL, BLEU: 26.8, chr-F: 0.545
testset: URL, BLEU: 38.3, chr-F: 0.569
testset: URL, BLEU: 50.1, chr-F: 0.656
testset: URL, BLEU: 6.9, chr-F: 0.217
testset: URL, BLEU: 15.4, chr-F: 0.345
testset: URL, BLEU: 52.5, chr-F: 0.674
testset: URL, BLEU: 52.1, chr-F: 0.673
### System Info:
* hf\_name: zle-eng
* source\_languages: zle
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['be', 'ru', 'uk', 'zle', 'en']
* src\_constituents: {'bel', 'orv\_Cyrl', 'bel\_Latn', 'rus', 'ukr', 'rue'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: zle
* tgt\_alpha3: eng
* short\_pair: zle-en
* chrF2\_score: 0.6559999999999999
* bleu: 50.1
* brevity\_penalty: 0.97
* ref\_len: 69599.0
* src\_name: East Slavic languages
* tgt\_name: English
* train\_date: 2020-08-01
* src\_alpha2: zle
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: zle-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### zle-eng\n\n\n* source group: East Slavic languages\n* target group: English\n* OPUS readme: zle-eng\n* model: transformer\n* source language(s): bel bel\\_Latn orv\\_Cyrl rue rus ukr\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.1, chr-F: 0.579\ntestset: URL, BLEU: 24.9, chr-F: 0.522\ntestset: URL, BLEU: 27.9, chr-F: 0.563\ntestset: URL, BLEU: 26.8, chr-F: 0.541\ntestset: URL, BLEU: 25.8, chr-F: 0.535\ntestset: URL, BLEU: 29.1, chr-F: 0.561\ntestset: URL, BLEU: 25.4, chr-F: 0.537\ntestset: URL, BLEU: 26.8, chr-F: 0.545\ntestset: URL, BLEU: 38.3, chr-F: 0.569\ntestset: URL, BLEU: 50.1, chr-F: 0.656\ntestset: URL, BLEU: 6.9, chr-F: 0.217\ntestset: URL, BLEU: 15.4, chr-F: 0.345\ntestset: URL, BLEU: 52.5, chr-F: 0.674\ntestset: URL, BLEU: 52.1, chr-F: 0.673",
"### System Info:\n\n\n* hf\\_name: zle-eng\n* source\\_languages: zle\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['be', 'ru', 'uk', 'zle', 'en']\n* src\\_constituents: {'bel', 'orv\\_Cyrl', 'bel\\_Latn', 'rus', 'ukr', 'rue'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zle\n* tgt\\_alpha3: eng\n* short\\_pair: zle-en\n* chrF2\\_score: 0.6559999999999999\n* bleu: 50.1\n* brevity\\_penalty: 0.97\n* ref\\_len: 69599.0\n* src\\_name: East Slavic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: zle\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: zle-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #be #ru #uk #zle #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zle-eng\n\n\n* source group: East Slavic languages\n* target group: English\n* OPUS readme: zle-eng\n* model: transformer\n* source language(s): bel bel\\_Latn orv\\_Cyrl rue rus ukr\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.1, chr-F: 0.579\ntestset: URL, BLEU: 24.9, chr-F: 0.522\ntestset: URL, BLEU: 27.9, chr-F: 0.563\ntestset: URL, BLEU: 26.8, chr-F: 0.541\ntestset: URL, BLEU: 25.8, chr-F: 0.535\ntestset: URL, BLEU: 29.1, chr-F: 0.561\ntestset: URL, BLEU: 25.4, chr-F: 0.537\ntestset: URL, BLEU: 26.8, chr-F: 0.545\ntestset: URL, BLEU: 38.3, chr-F: 0.569\ntestset: URL, BLEU: 50.1, chr-F: 0.656\ntestset: URL, BLEU: 6.9, chr-F: 0.217\ntestset: URL, BLEU: 15.4, chr-F: 0.345\ntestset: URL, BLEU: 52.5, chr-F: 0.674\ntestset: URL, BLEU: 52.1, chr-F: 0.673",
"### System Info:\n\n\n* hf\\_name: zle-eng\n* source\\_languages: zle\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['be', 'ru', 'uk', 'zle', 'en']\n* src\\_constituents: {'bel', 'orv\\_Cyrl', 'bel\\_Latn', 'rus', 'ukr', 'rue'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zle\n* tgt\\_alpha3: eng\n* short\\_pair: zle-en\n* chrF2\\_score: 0.6559999999999999\n* bleu: 50.1\n* brevity\\_penalty: 0.97\n* ref\\_len: 69599.0\n* src\\_name: East Slavic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: zle\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: zle-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
58,
444,
455
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #be #ru #uk #zle #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zle-eng\n\n\n* source group: East Slavic languages\n* target group: English\n* OPUS readme: zle-eng\n* model: transformer\n* source language(s): bel bel\\_Latn orv\\_Cyrl rue rus ukr\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.1, chr-F: 0.579\ntestset: URL, BLEU: 24.9, chr-F: 0.522\ntestset: URL, BLEU: 27.9, chr-F: 0.563\ntestset: URL, BLEU: 26.8, chr-F: 0.541\ntestset: URL, BLEU: 25.8, chr-F: 0.535\ntestset: URL, BLEU: 29.1, chr-F: 0.561\ntestset: URL, BLEU: 25.4, chr-F: 0.537\ntestset: URL, BLEU: 26.8, chr-F: 0.545\ntestset: URL, BLEU: 38.3, chr-F: 0.569\ntestset: URL, BLEU: 50.1, chr-F: 0.656\ntestset: URL, BLEU: 6.9, chr-F: 0.217\ntestset: URL, BLEU: 15.4, chr-F: 0.345\ntestset: URL, BLEU: 52.5, chr-F: 0.674\ntestset: URL, BLEU: 52.1, chr-F: 0.673### System Info:\n\n\n* hf\\_name: zle-eng\n* source\\_languages: zle\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['be', 'ru', 'uk', 'zle', 'en']\n* src\\_constituents: {'bel', 'orv\\_Cyrl', 'bel\\_Latn', 'rus', 'ukr', 'rue'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zle\n* tgt\\_alpha3: eng\n* short\\_pair: zle-en\n* chrF2\\_score: 0.6559999999999999\n* bleu: 50.1\n* brevity\\_penalty: 0.97\n* ref\\_len: 69599.0\n* src\\_name: East Slavic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: zle\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: zle-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### zle-zle
* source group: East Slavic languages
* target group: East Slavic languages
* OPUS readme: [zle-zle](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-zle/README.md)
* model: transformer
* source language(s): bel bel_Latn orv_Cyrl rus ukr
* target language(s): bel bel_Latn orv_Cyrl rus ukr
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bel-rus.bel.rus | 57.1 | 0.758 |
| Tatoeba-test.bel-ukr.bel.ukr | 55.5 | 0.751 |
| Tatoeba-test.multi.multi | 58.0 | 0.742 |
| Tatoeba-test.orv-rus.orv.rus | 5.8 | 0.226 |
| Tatoeba-test.orv-ukr.orv.ukr | 2.5 | 0.161 |
| Tatoeba-test.rus-bel.rus.bel | 50.5 | 0.714 |
| Tatoeba-test.rus-orv.rus.orv | 0.3 | 0.129 |
| Tatoeba-test.rus-ukr.rus.ukr | 63.9 | 0.794 |
| Tatoeba-test.ukr-bel.ukr.bel | 51.3 | 0.719 |
| Tatoeba-test.ukr-orv.ukr.orv | 0.3 | 0.106 |
| Tatoeba-test.ukr-rus.ukr.rus | 68.7 | 0.825 |
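
Since this model translates into several target languages, the card above requires a sentence-initial `>>id<<` token to pick the target. A minimal sketch of that usage, assuming the `Helsinki-NLP/opus-mt-zle-zle` id from the system info and placeholder sentences:

```python
# Minimal sketch: the leading >>id<< token selects the target language.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zle-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_texts = [
    ">>ukr<< Как твои дела?",  # Russian source, Ukrainian target (placeholder)
    ">>rus<< Добрий день!",    # Ukrainian source, Russian target (placeholder)
]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```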
### System Info:
- hf_name: zle-zle
- source_languages: zle
- target_languages: zle
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-zle/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['be', 'ru', 'uk', 'zle']
- src_constituents: {'bel', 'orv_Cyrl', 'bel_Latn', 'rus', 'ukr', 'rue'}
- tgt_constituents: {'bel', 'orv_Cyrl', 'bel_Latn', 'rus', 'ukr', 'rue'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zle-zle/opus-2020-07-27.test.txt
- src_alpha3: zle
- tgt_alpha3: zle
- short_pair: zle-zle
- chrF2_score: 0.742
- bleu: 58.0
- brevity_penalty: 1.0
- ref_len: 62731.0
- src_name: East Slavic languages
- tgt_name: East Slavic languages
- train_date: 2020-07-27
- src_alpha2: zle
- tgt_alpha2: zle
- prefer_old: False
- long_pair: zle-zle
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["be", "ru", "uk", "zle"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zle-zle | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"be",
"ru",
"uk",
"zle",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"be",
"ru",
"uk",
"zle"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #be #ru #uk #zle #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zle-zle
* source group: East Slavic languages
* target group: East Slavic languages
* OPUS readme: zle-zle
* model: transformer
* source language(s): bel bel\_Latn orv\_Cyrl rus ukr
* target language(s): bel bel\_Latn orv\_Cyrl rus ukr
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 57.1, chr-F: 0.758
testset: URL, BLEU: 55.5, chr-F: 0.751
testset: URL, BLEU: 58.0, chr-F: 0.742
testset: URL, BLEU: 5.8, chr-F: 0.226
testset: URL, BLEU: 2.5, chr-F: 0.161
testset: URL, BLEU: 50.5, chr-F: 0.714
testset: URL, BLEU: 0.3, chr-F: 0.129
testset: URL, BLEU: 63.9, chr-F: 0.794
testset: URL, BLEU: 51.3, chr-F: 0.719
testset: URL, BLEU: 0.3, chr-F: 0.106
testset: URL, BLEU: 68.7, chr-F: 0.825
### System Info:
* hf\_name: zle-zle
* source\_languages: zle
* target\_languages: zle
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['be', 'ru', 'uk', 'zle']
* src\_constituents: {'bel', 'orv\_Cyrl', 'bel\_Latn', 'rus', 'ukr', 'rue'}
* tgt\_constituents: {'bel', 'orv\_Cyrl', 'bel\_Latn', 'rus', 'ukr', 'rue'}
* src\_multilingual: True
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: zle
* tgt\_alpha3: zle
* short\_pair: zle-zle
* chrF2\_score: 0.742
* bleu: 58.0
* brevity\_penalty: 1.0
* ref\_len: 62731.0
* src\_name: East Slavic languages
* tgt\_name: East Slavic languages
* train\_date: 2020-07-27
* src\_alpha2: zle
* tgt\_alpha2: zle
* prefer\_old: False
* long\_pair: zle-zle
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### zle-zle\n\n\n* source group: East Slavic languages\n* target group: East Slavic languages\n* OPUS readme: zle-zle\n* model: transformer\n* source language(s): bel bel\\_Latn orv\\_Cyrl rus ukr\n* target language(s): bel bel\\_Latn orv\\_Cyrl rus ukr\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 57.1, chr-F: 0.758\ntestset: URL, BLEU: 55.5, chr-F: 0.751\ntestset: URL, BLEU: 58.0, chr-F: 0.742\ntestset: URL, BLEU: 5.8, chr-F: 0.226\ntestset: URL, BLEU: 2.5, chr-F: 0.161\ntestset: URL, BLEU: 50.5, chr-F: 0.714\ntestset: URL, BLEU: 0.3, chr-F: 0.129\ntestset: URL, BLEU: 63.9, chr-F: 0.794\ntestset: URL, BLEU: 51.3, chr-F: 0.719\ntestset: URL, BLEU: 0.3, chr-F: 0.106\ntestset: URL, BLEU: 68.7, chr-F: 0.825",
"### System Info:\n\n\n* hf\\_name: zle-zle\n* source\\_languages: zle\n* target\\_languages: zle\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['be', 'ru', 'uk', 'zle']\n* src\\_constituents: {'bel', 'orv\\_Cyrl', 'bel\\_Latn', 'rus', 'ukr', 'rue'}\n* tgt\\_constituents: {'bel', 'orv\\_Cyrl', 'bel\\_Latn', 'rus', 'ukr', 'rue'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zle\n* tgt\\_alpha3: zle\n* short\\_pair: zle-zle\n* chrF2\\_score: 0.742\n* bleu: 58.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 62731.0\n* src\\_name: East Slavic languages\n* tgt\\_name: East Slavic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: zle\n* tgt\\_alpha2: zle\n* prefer\\_old: False\n* long\\_pair: zle-zle\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #be #ru #uk #zle #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zle-zle\n\n\n* source group: East Slavic languages\n* target group: East Slavic languages\n* OPUS readme: zle-zle\n* model: transformer\n* source language(s): bel bel\\_Latn orv\\_Cyrl rus ukr\n* target language(s): bel bel\\_Latn orv\\_Cyrl rus ukr\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 57.1, chr-F: 0.758\ntestset: URL, BLEU: 55.5, chr-F: 0.751\ntestset: URL, BLEU: 58.0, chr-F: 0.742\ntestset: URL, BLEU: 5.8, chr-F: 0.226\ntestset: URL, BLEU: 2.5, chr-F: 0.161\ntestset: URL, BLEU: 50.5, chr-F: 0.714\ntestset: URL, BLEU: 0.3, chr-F: 0.129\ntestset: URL, BLEU: 63.9, chr-F: 0.794\ntestset: URL, BLEU: 51.3, chr-F: 0.719\ntestset: URL, BLEU: 0.3, chr-F: 0.106\ntestset: URL, BLEU: 68.7, chr-F: 0.825",
"### System Info:\n\n\n* hf\\_name: zle-zle\n* source\\_languages: zle\n* target\\_languages: zle\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['be', 'ru', 'uk', 'zle']\n* src\\_constituents: {'bel', 'orv\\_Cyrl', 'bel\\_Latn', 'rus', 'ukr', 'rue'}\n* tgt\\_constituents: {'bel', 'orv\\_Cyrl', 'bel\\_Latn', 'rus', 'ukr', 'rue'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zle\n* tgt\\_alpha3: zle\n* short\\_pair: zle-zle\n* chrF2\\_score: 0.742\n* bleu: 58.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 62731.0\n* src\\_name: East Slavic languages\n* tgt\\_name: East Slavic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: zle\n* tgt\\_alpha2: zle\n* prefer\\_old: False\n* long\\_pair: zle-zle\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
56,
418,
478
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #be #ru #uk #zle #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zle-zle\n\n\n* source group: East Slavic languages\n* target group: East Slavic languages\n* OPUS readme: zle-zle\n* model: transformer\n* source language(s): bel bel\\_Latn orv\\_Cyrl rus ukr\n* target language(s): bel bel\\_Latn orv\\_Cyrl rus ukr\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 57.1, chr-F: 0.758\ntestset: URL, BLEU: 55.5, chr-F: 0.751\ntestset: URL, BLEU: 58.0, chr-F: 0.742\ntestset: URL, BLEU: 5.8, chr-F: 0.226\ntestset: URL, BLEU: 2.5, chr-F: 0.161\ntestset: URL, BLEU: 50.5, chr-F: 0.714\ntestset: URL, BLEU: 0.3, chr-F: 0.129\ntestset: URL, BLEU: 63.9, chr-F: 0.794\ntestset: URL, BLEU: 51.3, chr-F: 0.719\ntestset: URL, BLEU: 0.3, chr-F: 0.106\ntestset: URL, BLEU: 68.7, chr-F: 0.825### System Info:\n\n\n* hf\\_name: zle-zle\n* source\\_languages: zle\n* target\\_languages: zle\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['be', 'ru', 'uk', 'zle']\n* src\\_constituents: {'bel', 'orv\\_Cyrl', 'bel\\_Latn', 'rus', 'ukr', 'rue'}\n* tgt\\_constituents: {'bel', 'orv\\_Cyrl', 'bel\\_Latn', 'rus', 'ukr', 'rue'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zle\n* tgt\\_alpha3: zle\n* short\\_pair: zle-zle\n* chrF2\\_score: 0.742\n* bleu: 58.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 62731.0\n* src\\_name: East Slavic languages\n* tgt\\_name: East Slavic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: zle\n* tgt\\_alpha2: zle\n* prefer\\_old: False\n* long\\_pair: zle-zle\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### zls-eng
* source group: South Slavic languages
* target group: English
* OPUS readme: [zls-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-eng/README.md)
* model: transformer
* source language(s): bos_Latn bul bul_Latn hrv mkd slv srp_Cyrl srp_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul-eng.bul.eng | 54.9 | 0.693 |
| Tatoeba-test.hbs-eng.hbs.eng | 55.7 | 0.700 |
| Tatoeba-test.mkd-eng.mkd.eng | 54.6 | 0.681 |
| Tatoeba-test.multi.eng | 53.6 | 0.676 |
| Tatoeba-test.slv-eng.slv.eng | 25.6 | 0.407 |
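
A minimal usage sketch, assuming the `Helsinki-NLP/opus-mt-zls-en` id from the system info; the source sentences below are placeholders and can come from any of the listed South Slavic languages, mixed in a single batch.

```python
# Minimal sketch: South Slavic -> English; mixed-language batch, beam search decoding.
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zls-en"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name).to(device)

src_texts = [
    "Добро утро!",        # Bulgarian placeholder
    "Kako se zoveš?",     # Croatian/Serbian placeholder
    "Каде е станицата?",  # Macedonian placeholder
]
batch = tokenizer(src_texts, return_tensors="pt", padding=True).to(device)
generated = model.generate(**batch, num_beams=4)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```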
### System Info:
- hf_name: zls-eng
- source_languages: zls
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['hr', 'mk', 'bg', 'sl', 'zls', 'en']
- src_constituents: {'hrv', 'mkd', 'srp_Latn', 'srp_Cyrl', 'bul_Latn', 'bul', 'bos_Latn', 'slv'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zls-eng/opus2m-2020-08-01.test.txt
- src_alpha3: zls
- tgt_alpha3: eng
- short_pair: zls-en
- chrF2_score: 0.6759999999999999
- bleu: 53.6
- brevity_penalty: 0.98
- ref_len: 68623.0
- src_name: South Slavic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: zls
- tgt_alpha2: en
- prefer_old: False
- long_pair: zls-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["hr", "mk", "bg", "sl", "zls", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zls-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hr",
"mk",
"bg",
"sl",
"zls",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hr",
"mk",
"bg",
"sl",
"zls",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hr #mk #bg #sl #zls #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zls-eng
* source group: South Slavic languages
* target group: English
* OPUS readme: zls-eng
* model: transformer
* source language(s): bos\_Latn bul bul\_Latn hrv mkd slv srp\_Cyrl srp\_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 54.9, chr-F: 0.693
testset: URL, BLEU: 55.7, chr-F: 0.700
testset: URL, BLEU: 54.6, chr-F: 0.681
testset: URL, BLEU: 53.6, chr-F: 0.676
testset: URL, BLEU: 25.6, chr-F: 0.407
### System Info:
* hf\_name: zls-eng
* source\_languages: zls
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['hr', 'mk', 'bg', 'sl', 'zls', 'en']
* src\_constituents: {'hrv', 'mkd', 'srp\_Latn', 'srp\_Cyrl', 'bul\_Latn', 'bul', 'bos\_Latn', 'slv'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: zls
* tgt\_alpha3: eng
* short\_pair: zls-en
* chrF2\_score: 0.6759999999999999
* bleu: 53.6
* brevity\_penalty: 0.98
* ref\_len: 68623.0
* src\_name: South Slavic languages
* tgt\_name: English
* train\_date: 2020-08-01
* src\_alpha2: zls
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: zls-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### zls-eng\n\n\n* source group: South Slavic languages\n* target group: English\n* OPUS readme: zls-eng\n* model: transformer\n* source language(s): bos\\_Latn bul bul\\_Latn hrv mkd slv srp\\_Cyrl srp\\_Latn\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 54.9, chr-F: 0.693\ntestset: URL, BLEU: 55.7, chr-F: 0.700\ntestset: URL, BLEU: 54.6, chr-F: 0.681\ntestset: URL, BLEU: 53.6, chr-F: 0.676\ntestset: URL, BLEU: 25.6, chr-F: 0.407",
"### System Info:\n\n\n* hf\\_name: zls-eng\n* source\\_languages: zls\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hr', 'mk', 'bg', 'sl', 'zls', 'en']\n* src\\_constituents: {'hrv', 'mkd', 'srp\\_Latn', 'srp\\_Cyrl', 'bul\\_Latn', 'bul', 'bos\\_Latn', 'slv'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zls\n* tgt\\_alpha3: eng\n* short\\_pair: zls-en\n* chrF2\\_score: 0.6759999999999999\n* bleu: 53.6\n* brevity\\_penalty: 0.98\n* ref\\_len: 68623.0\n* src\\_name: South Slavic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: zls\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: zls-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hr #mk #bg #sl #zls #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zls-eng\n\n\n* source group: South Slavic languages\n* target group: English\n* OPUS readme: zls-eng\n* model: transformer\n* source language(s): bos\\_Latn bul bul\\_Latn hrv mkd slv srp\\_Cyrl srp\\_Latn\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 54.9, chr-F: 0.693\ntestset: URL, BLEU: 55.7, chr-F: 0.700\ntestset: URL, BLEU: 54.6, chr-F: 0.681\ntestset: URL, BLEU: 53.6, chr-F: 0.676\ntestset: URL, BLEU: 25.6, chr-F: 0.407",
"### System Info:\n\n\n* hf\\_name: zls-eng\n* source\\_languages: zls\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hr', 'mk', 'bg', 'sl', 'zls', 'en']\n* src\\_constituents: {'hrv', 'mkd', 'srp\\_Latn', 'srp\\_Cyrl', 'bul\\_Latn', 'bul', 'bos\\_Latn', 'slv'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zls\n* tgt\\_alpha3: eng\n* short\\_pair: zls-en\n* chrF2\\_score: 0.6759999999999999\n* bleu: 53.6\n* brevity\\_penalty: 0.98\n* ref\\_len: 68623.0\n* src\\_name: South Slavic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: zls\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: zls-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
61,
255,
484
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hr #mk #bg #sl #zls #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zls-eng\n\n\n* source group: South Slavic languages\n* target group: English\n* OPUS readme: zls-eng\n* model: transformer\n* source language(s): bos\\_Latn bul bul\\_Latn hrv mkd slv srp\\_Cyrl srp\\_Latn\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 54.9, chr-F: 0.693\ntestset: URL, BLEU: 55.7, chr-F: 0.700\ntestset: URL, BLEU: 54.6, chr-F: 0.681\ntestset: URL, BLEU: 53.6, chr-F: 0.676\ntestset: URL, BLEU: 25.6, chr-F: 0.407### System Info:\n\n\n* hf\\_name: zls-eng\n* source\\_languages: zls\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hr', 'mk', 'bg', 'sl', 'zls', 'en']\n* src\\_constituents: {'hrv', 'mkd', 'srp\\_Latn', 'srp\\_Cyrl', 'bul\\_Latn', 'bul', 'bos\\_Latn', 'slv'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zls\n* tgt\\_alpha3: eng\n* short\\_pair: zls-en\n* chrF2\\_score: 0.6759999999999999\n* bleu: 53.6\n* brevity\\_penalty: 0.98\n* ref\\_len: 68623.0\n* src\\_name: South Slavic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: zls\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: zls-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### zls-zls
* source group: South Slavic languages
* target group: South Slavic languages
* OPUS readme: [zls-zls](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-zls/README.md)
* model: transformer
* source language(s): bul mkd srp_Cyrl
* target language(s): bul mkd srp_Cyrl
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zls/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zls/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zls/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul-hbs.bul.hbs | 19.3 | 0.514 |
| Tatoeba-test.bul-mkd.bul.mkd | 31.9 | 0.669 |
| Tatoeba-test.hbs-bul.hbs.bul | 18.0 | 0.636 |
| Tatoeba-test.hbs-mkd.hbs.mkd | 19.4 | 0.322 |
| Tatoeba-test.mkd-bul.mkd.bul | 44.6 | 0.679 |
| Tatoeba-test.mkd-hbs.mkd.hbs | 5.5 | 0.152 |
| Tatoeba-test.multi.multi | 26.5 | 0.563 |
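
As with the other multi-target models, every source sentence needs a leading `>>id<<` token. The sketch below is illustrative only, assuming the `Helsinki-NLP/opus-mt-zls-zls` id from the system info: it first lists the `>>id<<` control tokens present in the tokenizer vocabulary and then translates a placeholder Macedonian sentence into Bulgarian.

```python
# Minimal sketch: inspect the available >>id<< target tokens, then translate.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zls-zls"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The valid target tags are stored as ordinary vocabulary entries of the form >>id<<.
language_tags = [tok for tok in tokenizer.get_vocab() if tok.startswith(">>") and tok.endswith("<<")]
print(language_tags)

src_texts = [">>bul<< Каде е станицата?"]  # Macedonian source, Bulgarian target (placeholder)
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```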
### System Info:
- hf_name: zls-zls
- source_languages: zls
- target_languages: zls
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zls-zls/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['hr', 'mk', 'bg', 'sl', 'zls']
- src_constituents: {'hrv', 'mkd', 'srp_Latn', 'srp_Cyrl', 'bul_Latn', 'bul', 'bos_Latn', 'slv'}
- tgt_constituents: {'hrv', 'mkd', 'srp_Latn', 'srp_Cyrl', 'bul_Latn', 'bul', 'bos_Latn', 'slv'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zls/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zls-zls/opus-2020-07-27.test.txt
- src_alpha3: zls
- tgt_alpha3: zls
- short_pair: zls-zls
- chrF2_score: 0.563
- bleu: 26.5
- brevity_penalty: 1.0
- ref_len: 58.0
- src_name: South Slavic languages
- tgt_name: South Slavic languages
- train_date: 2020-07-27
- src_alpha2: zls
- tgt_alpha2: zls
- prefer_old: False
- long_pair: zls-zls
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["hr", "mk", "bg", "sl", "zls"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zls-zls | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"hr",
"mk",
"bg",
"sl",
"zls",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hr",
"mk",
"bg",
"sl",
"zls"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #hr #mk #bg #sl #zls #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zls-zls
* source group: South Slavic languages
* target group: South Slavic languages
* OPUS readme: zls-zls
* model: transformer
* source language(s): bul mkd srp\_Cyrl
* target language(s): bul mkd srp\_Cyrl
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 19.3, chr-F: 0.514
testset: URL, BLEU: 31.9, chr-F: 0.669
testset: URL, BLEU: 18.0, chr-F: 0.636
testset: URL, BLEU: 19.4, chr-F: 0.322
testset: URL, BLEU: 44.6, chr-F: 0.679
testset: URL, BLEU: 5.5, chr-F: 0.152
testset: URL, BLEU: 26.5, chr-F: 0.563
### System Info:
* hf\_name: zls-zls
* source\_languages: zls
* target\_languages: zls
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['hr', 'mk', 'bg', 'sl', 'zls']
* src\_constituents: {'hrv', 'mkd', 'srp\_Latn', 'srp\_Cyrl', 'bul\_Latn', 'bul', 'bos\_Latn', 'slv'}
* tgt\_constituents: {'hrv', 'mkd', 'srp\_Latn', 'srp\_Cyrl', 'bul\_Latn', 'bul', 'bos\_Latn', 'slv'}
* src\_multilingual: True
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: zls
* tgt\_alpha3: zls
* short\_pair: zls-zls
* chrF2\_score: 0.563
* bleu: 26.5
* brevity\_penalty: 1.0
* ref\_len: 58.0
* src\_name: South Slavic languages
* tgt\_name: South Slavic languages
* train\_date: 2020-07-27
* src\_alpha2: zls
* tgt\_alpha2: zls
* prefer\_old: False
* long\_pair: zls-zls
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### zls-zls\n\n\n* source group: South Slavic languages\n* target group: South Slavic languages\n* OPUS readme: zls-zls\n* model: transformer\n* source language(s): bul mkd srp\\_Cyrl\n* target language(s): bul mkd srp\\_Cyrl\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 19.3, chr-F: 0.514\ntestset: URL, BLEU: 31.9, chr-F: 0.669\ntestset: URL, BLEU: 18.0, chr-F: 0.636\ntestset: URL, BLEU: 19.4, chr-F: 0.322\ntestset: URL, BLEU: 44.6, chr-F: 0.679\ntestset: URL, BLEU: 5.5, chr-F: 0.152\ntestset: URL, BLEU: 26.5, chr-F: 0.563",
"### System Info:\n\n\n* hf\\_name: zls-zls\n* source\\_languages: zls\n* target\\_languages: zls\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hr', 'mk', 'bg', 'sl', 'zls']\n* src\\_constituents: {'hrv', 'mkd', 'srp\\_Latn', 'srp\\_Cyrl', 'bul\\_Latn', 'bul', 'bos\\_Latn', 'slv'}\n* tgt\\_constituents: {'hrv', 'mkd', 'srp\\_Latn', 'srp\\_Cyrl', 'bul\\_Latn', 'bul', 'bos\\_Latn', 'slv'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zls\n* tgt\\_alpha3: zls\n* short\\_pair: zls-zls\n* chrF2\\_score: 0.563\n* bleu: 26.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 58.0\n* src\\_name: South Slavic languages\n* tgt\\_name: South Slavic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: zls\n* tgt\\_alpha2: zls\n* prefer\\_old: False\n* long\\_pair: zls-zls\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hr #mk #bg #sl #zls #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zls-zls\n\n\n* source group: South Slavic languages\n* target group: South Slavic languages\n* OPUS readme: zls-zls\n* model: transformer\n* source language(s): bul mkd srp\\_Cyrl\n* target language(s): bul mkd srp\\_Cyrl\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 19.3, chr-F: 0.514\ntestset: URL, BLEU: 31.9, chr-F: 0.669\ntestset: URL, BLEU: 18.0, chr-F: 0.636\ntestset: URL, BLEU: 19.4, chr-F: 0.322\ntestset: URL, BLEU: 44.6, chr-F: 0.679\ntestset: URL, BLEU: 5.5, chr-F: 0.152\ntestset: URL, BLEU: 26.5, chr-F: 0.563",
"### System Info:\n\n\n* hf\\_name: zls-zls\n* source\\_languages: zls\n* target\\_languages: zls\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hr', 'mk', 'bg', 'sl', 'zls']\n* src\\_constituents: {'hrv', 'mkd', 'srp\\_Latn', 'srp\\_Cyrl', 'bul\\_Latn', 'bul', 'bos\\_Latn', 'slv'}\n* tgt\\_constituents: {'hrv', 'mkd', 'srp\\_Latn', 'srp\\_Cyrl', 'bul\\_Latn', 'bul', 'bos\\_Latn', 'slv'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zls\n* tgt\\_alpha3: zls\n* short\\_pair: zls-zls\n* chrF2\\_score: 0.563\n* bleu: 26.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 58.0\n* src\\_name: South Slavic languages\n* tgt\\_name: South Slavic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: zls\n* tgt\\_alpha2: zls\n* prefer\\_old: False\n* long\\_pair: zls-zls\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
59,
316,
529
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #hr #mk #bg #sl #zls #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zls-zls\n\n\n* source group: South Slavic languages\n* target group: South Slavic languages\n* OPUS readme: zls-zls\n* model: transformer\n* source language(s): bul mkd srp\\_Cyrl\n* target language(s): bul mkd srp\\_Cyrl\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 19.3, chr-F: 0.514\ntestset: URL, BLEU: 31.9, chr-F: 0.669\ntestset: URL, BLEU: 18.0, chr-F: 0.636\ntestset: URL, BLEU: 19.4, chr-F: 0.322\ntestset: URL, BLEU: 44.6, chr-F: 0.679\ntestset: URL, BLEU: 5.5, chr-F: 0.152\ntestset: URL, BLEU: 26.5, chr-F: 0.563### System Info:\n\n\n* hf\\_name: zls-zls\n* source\\_languages: zls\n* target\\_languages: zls\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['hr', 'mk', 'bg', 'sl', 'zls']\n* src\\_constituents: {'hrv', 'mkd', 'srp\\_Latn', 'srp\\_Cyrl', 'bul\\_Latn', 'bul', 'bos\\_Latn', 'slv'}\n* tgt\\_constituents: {'hrv', 'mkd', 'srp\\_Latn', 'srp\\_Cyrl', 'bul\\_Latn', 'bul', 'bos\\_Latn', 'slv'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zls\n* tgt\\_alpha3: zls\n* short\\_pair: zls-zls\n* chrF2\\_score: 0.563\n* bleu: 26.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 58.0\n* src\\_name: South Slavic languages\n* tgt\\_name: South Slavic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: zls\n* tgt\\_alpha2: zls\n* prefer\\_old: False\n* long\\_pair: zls-zls\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### zlw-eng
* source group: West Slavic languages
* target group: English
* OPUS readme: [zlw-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zlw-eng/README.md)
* model: transformer
* source language(s): ces csb_Latn dsb hsb pol
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-ceseng.ces.eng | 25.7 | 0.536 |
| newstest2009-ceseng.ces.eng | 24.6 | 0.530 |
| newstest2010-ceseng.ces.eng | 25.0 | 0.540 |
| newstest2011-ceseng.ces.eng | 25.9 | 0.539 |
| newstest2012-ceseng.ces.eng | 24.8 | 0.533 |
| newstest2013-ceseng.ces.eng | 27.8 | 0.551 |
| newstest2014-csen-ceseng.ces.eng | 30.3 | 0.585 |
| newstest2015-encs-ceseng.ces.eng | 27.5 | 0.542 |
| newstest2016-encs-ceseng.ces.eng | 29.1 | 0.564 |
| newstest2017-encs-ceseng.ces.eng | 26.0 | 0.537 |
| newstest2018-encs-ceseng.ces.eng | 27.3 | 0.544 |
| Tatoeba-test.ces-eng.ces.eng | 53.3 | 0.691 |
| Tatoeba-test.csb-eng.csb.eng | 10.2 | 0.313 |
| Tatoeba-test.dsb-eng.dsb.eng | 11.7 | 0.296 |
| Tatoeba-test.hsb-eng.hsb.eng | 24.6 | 0.426 |
| Tatoeba-test.multi.eng | 51.8 | 0.680 |
| Tatoeba-test.pol-eng.pol.eng | 50.4 | 0.667 |
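
A minimal usage sketch (not part of the original card), assuming the weights are published on the Hugging Face Hub under the repository id `Helsinki-NLP/opus-mt-zlw-en` given in the system info below, and that `transformers` and `sentencepiece` are installed. English is the only target language, so no `>>id<<` token is needed:

```python
# Sketch: translate West Slavic input (Czech, Polish, ...) into English.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zlw-en"  # assumed Hub id from the system info
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_sentences = [
    "Dobrý den, jak se máte?",       # Czech
    "To jest bardzo dobry pomysł.",  # Polish
]

batch = tokenizer(src_sentences, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```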
### System Info:
- hf_name: zlw-eng
- source_languages: zlw
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zlw-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pl', 'cs', 'zlw', 'en']
- src_constituents: {'csb_Latn', 'dsb', 'hsb', 'pol', 'ces'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-eng/opus2m-2020-08-01.test.txt
- src_alpha3: zlw
- tgt_alpha3: eng
- short_pair: zlw-en
- chrF2_score: 0.68
- bleu: 51.8
- brevity_penalty: 0.9620000000000001
- ref_len: 75742.0
- src_name: West Slavic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: zlw
- tgt_alpha2: en
- prefer_old: False
- long_pair: zlw-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["pl", "cs", "zlw", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zlw-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"pl",
"cs",
"zlw",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pl",
"cs",
"zlw",
"en"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #pl #cs #zlw #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zlw-eng
* source group: West Slavic languages
* target group: English
* OPUS readme: zlw-eng
* model: transformer
* source language(s): ces csb\_Latn dsb hsb pol
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.7, chr-F: 0.536
testset: URL, BLEU: 24.6, chr-F: 0.530
testset: URL, BLEU: 25.0, chr-F: 0.540
testset: URL, BLEU: 25.9, chr-F: 0.539
testset: URL, BLEU: 24.8, chr-F: 0.533
testset: URL, BLEU: 27.8, chr-F: 0.551
testset: URL, BLEU: 30.3, chr-F: 0.585
testset: URL, BLEU: 27.5, chr-F: 0.542
testset: URL, BLEU: 29.1, chr-F: 0.564
testset: URL, BLEU: 26.0, chr-F: 0.537
testset: URL, BLEU: 27.3, chr-F: 0.544
testset: URL, BLEU: 53.3, chr-F: 0.691
testset: URL, BLEU: 10.2, chr-F: 0.313
testset: URL, BLEU: 11.7, chr-F: 0.296
testset: URL, BLEU: 24.6, chr-F: 0.426
testset: URL, BLEU: 51.8, chr-F: 0.680
testset: URL, BLEU: 50.4, chr-F: 0.667
### System Info:
* hf\_name: zlw-eng
* source\_languages: zlw
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['pl', 'cs', 'zlw', 'en']
* src\_constituents: {'csb\_Latn', 'dsb', 'hsb', 'pol', 'ces'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: zlw
* tgt\_alpha3: eng
* short\_pair: zlw-en
* chrF2\_score: 0.68
* bleu: 51.8
* brevity\_penalty: 0.9620000000000001
* ref\_len: 75742.0
* src\_name: West Slavic languages
* tgt\_name: English
* train\_date: 2020-08-01
* src\_alpha2: zlw
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: zlw-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### zlw-eng\n\n\n* source group: West Slavic languages\n* target group: English\n* OPUS readme: zlw-eng\n* model: transformer\n* source language(s): ces csb\\_Latn dsb hsb pol\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.7, chr-F: 0.536\ntestset: URL, BLEU: 24.6, chr-F: 0.530\ntestset: URL, BLEU: 25.0, chr-F: 0.540\ntestset: URL, BLEU: 25.9, chr-F: 0.539\ntestset: URL, BLEU: 24.8, chr-F: 0.533\ntestset: URL, BLEU: 27.8, chr-F: 0.551\ntestset: URL, BLEU: 30.3, chr-F: 0.585\ntestset: URL, BLEU: 27.5, chr-F: 0.542\ntestset: URL, BLEU: 29.1, chr-F: 0.564\ntestset: URL, BLEU: 26.0, chr-F: 0.537\ntestset: URL, BLEU: 27.3, chr-F: 0.544\ntestset: URL, BLEU: 53.3, chr-F: 0.691\ntestset: URL, BLEU: 10.2, chr-F: 0.313\ntestset: URL, BLEU: 11.7, chr-F: 0.296\ntestset: URL, BLEU: 24.6, chr-F: 0.426\ntestset: URL, BLEU: 51.8, chr-F: 0.680\ntestset: URL, BLEU: 50.4, chr-F: 0.667",
"### System Info:\n\n\n* hf\\_name: zlw-eng\n* source\\_languages: zlw\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'cs', 'zlw', 'en']\n* src\\_constituents: {'csb\\_Latn', 'dsb', 'hsb', 'pol', 'ces'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zlw\n* tgt\\_alpha3: eng\n* short\\_pair: zlw-en\n* chrF2\\_score: 0.68\n* bleu: 51.8\n* brevity\\_penalty: 0.9620000000000001\n* ref\\_len: 75742.0\n* src\\_name: West Slavic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: zlw\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: zlw-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #cs #zlw #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zlw-eng\n\n\n* source group: West Slavic languages\n* target group: English\n* OPUS readme: zlw-eng\n* model: transformer\n* source language(s): ces csb\\_Latn dsb hsb pol\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.7, chr-F: 0.536\ntestset: URL, BLEU: 24.6, chr-F: 0.530\ntestset: URL, BLEU: 25.0, chr-F: 0.540\ntestset: URL, BLEU: 25.9, chr-F: 0.539\ntestset: URL, BLEU: 24.8, chr-F: 0.533\ntestset: URL, BLEU: 27.8, chr-F: 0.551\ntestset: URL, BLEU: 30.3, chr-F: 0.585\ntestset: URL, BLEU: 27.5, chr-F: 0.542\ntestset: URL, BLEU: 29.1, chr-F: 0.564\ntestset: URL, BLEU: 26.0, chr-F: 0.537\ntestset: URL, BLEU: 27.3, chr-F: 0.544\ntestset: URL, BLEU: 53.3, chr-F: 0.691\ntestset: URL, BLEU: 10.2, chr-F: 0.313\ntestset: URL, BLEU: 11.7, chr-F: 0.296\ntestset: URL, BLEU: 24.6, chr-F: 0.426\ntestset: URL, BLEU: 51.8, chr-F: 0.680\ntestset: URL, BLEU: 50.4, chr-F: 0.667",
"### System Info:\n\n\n* hf\\_name: zlw-eng\n* source\\_languages: zlw\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'cs', 'zlw', 'en']\n* src\\_constituents: {'csb\\_Latn', 'dsb', 'hsb', 'pol', 'ces'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zlw\n* tgt\\_alpha3: eng\n* short\\_pair: zlw-en\n* chrF2\\_score: 0.68\n* bleu: 51.8\n* brevity\\_penalty: 0.9620000000000001\n* ref\\_len: 75742.0\n* src\\_name: West Slavic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: zlw\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: zlw-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
57,
509,
446
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #cs #zlw #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zlw-eng\n\n\n* source group: West Slavic languages\n* target group: English\n* OPUS readme: zlw-eng\n* model: transformer\n* source language(s): ces csb\\_Latn dsb hsb pol\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.7, chr-F: 0.536\ntestset: URL, BLEU: 24.6, chr-F: 0.530\ntestset: URL, BLEU: 25.0, chr-F: 0.540\ntestset: URL, BLEU: 25.9, chr-F: 0.539\ntestset: URL, BLEU: 24.8, chr-F: 0.533\ntestset: URL, BLEU: 27.8, chr-F: 0.551\ntestset: URL, BLEU: 30.3, chr-F: 0.585\ntestset: URL, BLEU: 27.5, chr-F: 0.542\ntestset: URL, BLEU: 29.1, chr-F: 0.564\ntestset: URL, BLEU: 26.0, chr-F: 0.537\ntestset: URL, BLEU: 27.3, chr-F: 0.544\ntestset: URL, BLEU: 53.3, chr-F: 0.691\ntestset: URL, BLEU: 10.2, chr-F: 0.313\ntestset: URL, BLEU: 11.7, chr-F: 0.296\ntestset: URL, BLEU: 24.6, chr-F: 0.426\ntestset: URL, BLEU: 51.8, chr-F: 0.680\ntestset: URL, BLEU: 50.4, chr-F: 0.667### System Info:\n\n\n* hf\\_name: zlw-eng\n* source\\_languages: zlw\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'cs', 'zlw', 'en']\n* src\\_constituents: {'csb\\_Latn', 'dsb', 'hsb', 'pol', 'ces'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zlw\n* tgt\\_alpha3: eng\n* short\\_pair: zlw-en\n* chrF2\\_score: 0.68\n* bleu: 51.8\n* brevity\\_penalty: 0.9620000000000001\n* ref\\_len: 75742.0\n* src\\_name: West Slavic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: zlw\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: zlw-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers | ### zlw-fiu
* source language name: West Slavic languages
* target language name: Finno-Ugrian languages
* OPUS readme: [README.md](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/README.md)
* model: transformer
* source language codes: dsb, cs, csb_Latn, hsb, pl, zlw
* target language codes: hu, vro, fi, liv_Latn, mdf, krl, fkv_Latn, mhr, et, sma, udm, vep, myv, kpv, se, izh, fiu
* dataset: opus
* release date: 2021-02-18
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2021-02-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/opus-2021-02-18.zip/zlw-fiu/opus-2021-02-18.zip)
* a sentence-initial language token is required in the form of >>id<<(id = valid, usually three-letter target language ID)
* Training data:
* ces-fin: Tatoeba-train (1000000)
* ces-hun: Tatoeba-train (1000000)
* pol-est: Tatoeba-train (1000000)
* pol-fin: Tatoeba-train (1000000)
* pol-hun: Tatoeba-train (1000000)
* Validation data:
* ces-fin: Tatoeba-dev, 1000
* ces-hun: Tatoeba-dev, 1000
* est-pol: Tatoeba-dev, 1000
* fin-pol: Tatoeba-dev, 1000
* hun-pol: Tatoeba-dev, 1000
* mhr-pol: Tatoeba-dev, 461
* total-size-shuffled: 5426
* devset-selected: top 5000 lines of Tatoeba-dev.src.shuffled!
* Test data:
* newssyscomb2009.ces-hun: 502/9733
* newstest2009.ces-hun: 2525/54965
* Tatoeba-test.ces-fin: 88/408
* Tatoeba-test.ces-hun: 1911/10336
* Tatoeba-test.multi-multi: 4562/25497
* Tatoeba-test.pol-chm: 5/36
* Tatoeba-test.pol-est: 15/98
* Tatoeba-test.pol-fin: 609/3293
* Tatoeba-test.pol-hun: 1934/11285
* test set translations file: [test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/opus-2021-02-18.zip/zlw-fiu/opus-2021-02-18.test.txt)
* test set scores file: [eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/opus-2021-02-18.zip/zlw-fiu/opus-2021-02-18.eval.txt)
* BLEU-scores
|Test set|score|
|---|---|
|Tatoeba-test.ces-fin|57.2|
|Tatoeba-test.ces-hun|42.6|
|Tatoeba-test.multi-multi|39.4|
|Tatoeba-test.pol-hun|36.6|
|Tatoeba-test.pol-fin|36.1|
|Tatoeba-test.pol-est|20.9|
|newssyscomb2009.ces-hun|13.9|
|newstest2009.ces-hun|13.9|
|Tatoeba-test.pol-chm|2.0|
* chr-F-scores
|Test set|score|
|---|---|
|Tatoeba-test.ces-fin|0.71|
|Tatoeba-test.ces-hun|0.637|
|Tatoeba-test.multi-multi|0.616|
|Tatoeba-test.pol-hun|0.605|
|Tatoeba-test.pol-fin|0.592|
|newssyscomb2009.ces-hun|0.449|
|newstest2009.ces-hun|0.443|
|Tatoeba-test.pol-est|0.372|
|Tatoeba-test.pol-chm|0.007|
### System Info:
* hf_name: zlw-fiu
* source_languages: dsb,cs,csb_Latn,hsb,pl,zlw
* target_languages: hu,vro,fi,liv_Latn,mdf,krl,fkv_Latn,mhr,et,sma,udm,vep,myv,kpv,se,izh,fiu
* opus_readme_url: https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/opus-2021-02-18.zip/README.md
* original_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['dsb', 'cs', 'csb_Latn', 'hsb', 'pl', 'zlw', 'hu', 'vro', 'fi', 'liv_Latn', 'mdf', 'krl', 'fkv_Latn', 'mhr', 'et', 'sma', 'udm', 'vep', 'myv', 'kpv', 'se', 'izh', 'fiu']
* src_constituents: ['dsb', 'ces', 'csb_Latn', 'hsb', 'pol']
* tgt_constituents: ['hun', 'vro', 'fin', 'liv_Latn', 'mdf', 'krl', 'fkv_Latn', 'mhr', 'est', 'sma', 'udm', 'vep', 'myv', 'kpv', 'sme', 'izh']
* src_multilingual: True
* tgt_multilingual: True
* helsinki_git_sha: a0966db6db0ae616a28471ff0faf461b36fec07d
* transformers_git_sha: 3857f2b4e34912c942694489c2b667d9476e55f5
* port_machine: bungle
* port_time: 2021-06-29-15:24 | {"language": ["dsb", "cs", "csb_Latn", "hsb", "pl", "zlw", "hu", "vro", "fi", "liv_Latn", "mdf", "krl", "fkv_Latn", "mhr", "et", "sma", "udm", "vep", "myv", "kpv", "se", "izh", "fiu"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zlw-fiu | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"zlw",
"fiu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"dsb",
"cs",
"csb_Latn",
"hsb",
"pl",
"zlw",
"hu",
"vro",
"fi",
"liv_Latn",
"mdf",
"krl",
"fkv_Latn",
"mhr",
"et",
"sma",
"udm",
"vep",
"myv",
"kpv",
"se",
"izh",
"fiu"
] | TAGS
#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #zlw #fiu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zlw-fiu
* source language name: West Slavic languages
* target language name: Finno-Ugrian languages
* OPUS readme: URL
* model: transformer
* source language codes: dsb, cs, csb_Latn, hsb, pl, zlw
* target language codes: hu, vro, fi, liv_Latn, mdf, krl, fkv_Latn, mhr, et, sma, udm, vep, myv, kpv, se, izh, fiu
* dataset: opus
* release date: 2021-02-18
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* a sentence-initial language token is required in the form of >>id<<(id = valid, usually three-letter target language ID)
* Training data:
* ces-fin: Tatoeba-train (1000000)
* ces-hun: Tatoeba-train (1000000)
* pol-est: Tatoeba-train (1000000)
* pol-fin: Tatoeba-train (1000000)
* pol-hun: Tatoeba-train (1000000)
* Validation data:
* ces-fin: Tatoeba-dev, 1000
* ces-hun: Tatoeba-dev, 1000
* est-pol: Tatoeba-dev, 1000
* fin-pol: Tatoeba-dev, 1000
* hun-pol: Tatoeba-dev, 1000
* mhr-pol: Tatoeba-dev, 461
* total-size-shuffled: 5426
* devset-selected: top 5000 lines of URL.shuffled!
* Test data:
* URL-hun: 502/9733
* URL-hun: 2525/54965
* URL-fin: 88/408
* URL-hun: 1911/10336
* URL-multi: 4562/25497
* URL-chm: 5/36
* URL-est: 15/98
* URL-fin: 609/3293
* URL-hun: 1934/11285
* test set translations file: URL
* test set scores file: URL
* BLEU-scores
|Test set|score|
|---|---|
|URL-fin|57.2|
|URL-hun|42.6|
|URL-multi|39.4|
|URL-hun|36.6|
|URL-fin|36.1|
|URL-est|20.9|
|URL-hun|13.9|
|URL-hun|13.9|
|URL-chm|2.0|
* chr-F-scores
|Test set|score|
|---|---|
|URL-fin|0.71|
|URL-hun|0.637|
|URL-multi|0.616|
|URL-hun|0.605|
|URL-fin|0.592|
|URL-hun|0.449|
|URL-hun|0.443|
|URL-est|0.372|
|URL-chm|0.007|
### System Info:
* hf_name: zlw-fiu
* source_languages: dsb,cs,csb_Latn,hsb,pl,zlw
* target_languages: hu,vro,fi,liv_Latn,mdf,krl,fkv_Latn,mhr,et,sma,udm,vep,myv,kpv,se,izh,fiu
* opus_readme_url: URL
* original_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['dsb', 'cs', 'csb_Latn', 'hsb', 'pl', 'zlw', 'hu', 'vro', 'fi', 'liv_Latn', 'mdf', 'krl', 'fkv_Latn', 'mhr', 'et', 'sma', 'udm', 'vep', 'myv', 'kpv', 'se', 'izh', 'fiu']
* src_constituents: ['dsb', 'ces', 'csb_Latn', 'hsb', 'pol']
* tgt_constituents: ['hun', 'vro', 'fin', 'liv_Latn', 'mdf', 'krl', 'fkv_Latn', 'mhr', 'est', 'sma', 'udm', 'vep', 'myv', 'kpv', 'sme', 'izh']
* src_multilingual: True
* tgt_multilingual: True
* helsinki_git_sha: a0966db6db0ae616a28471ff0faf461b36fec07d
* transformers_git_sha: 3857f2b4e34912c942694489c2b667d9476e55f5
* port_machine: bungle
* port_time: 2021-06-29-15:24 | [
"### zlw-fiu\n* source language name: West Slavic languages\n* target language name: Finno-Ugrian languages\n* OPUS readme: URL\n* model: transformer\n* source language codes: dsb, cs, csb_Latn, hsb, pl, zlw\n* target language codes: hu, vro, fi, liv_Latn, mdf, krl, fkv_Latn, mhr, et, sma, udm, vep, myv, kpv, se, izh, fiu\n* dataset: opus \n* release date: 2021-02-18\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* a sentence-initial language token is required in the form of >>id<<(id = valid, usually three-letter target language ID)\n* Training data: \n * ces-fin: Tatoeba-train (1000000)\n * ces-hun: Tatoeba-train (1000000)\n * pol-est: Tatoeba-train (1000000)\n * pol-fin: Tatoeba-train (1000000)\n * pol-hun: Tatoeba-train (1000000)\n* Validation data: \n * ces-fin: Tatoeba-dev, 1000\n * ces-hun: Tatoeba-dev, 1000\n * est-pol: Tatoeba-dev, 1000\n * fin-pol: Tatoeba-dev, 1000\n * hun-pol: Tatoeba-dev, 1000\n * mhr-pol: Tatoeba-dev, 461\n * total-size-shuffled: 5426\n * devset-selected: top 5000 lines of URL.shuffled!\n* Test data: \n * URL-hun: 502/9733\n * URL-hun: 2525/54965\n * URL-fin: 88/408\n * URL-hun: 1911/10336\n * URL-multi: 4562/25497\n * URL-chm: 5/36\n * URL-est: 15/98\n * URL-fin: 609/3293\n * URL-hun: 1934/11285\n* test set translations file: URL\n* test set scores file: URL\n* BLEU-scores\n|Test set|score|\n|---|---|\n|URL-fin|57.2|\n|URL-hun|42.6|\n|URL-multi|39.4|\n|URL-hun|36.6|\n|URL-fin|36.1|\n|URL-est|20.9|\n|URL-hun|13.9|\n|URL-hun|13.9|\n|URL-chm|2.0|\n* chr-F-scores\n|Test set|score|\n|---|---|\n|URL-fin|0.71|\n|URL-hun|0.637|\n|URL-multi|0.616|\n|URL-hun|0.605|\n|URL-fin|0.592|\n|URL-hun|0.449|\n|URL-hun|0.443|\n|URL-est|0.372|\n|URL-chm|0.007|",
"### System Info: \n* hf_name: zlw-fiu\n* source_languages: dsb,cs,csb_Latn,hsb,pl,zlw\n* target_languages: hu,vro,fi,liv_Latn,mdf,krl,fkv_Latn,mhr,et,sma,udm,vep,myv,kpv,se,izh,fiu\n* opus_readme_url: URL\n* original_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['dsb', 'cs', 'csb_Latn', 'hsb', 'pl', 'zlw', 'hu', 'vro', 'fi', 'liv_Latn', 'mdf', 'krl', 'fkv_Latn', 'mhr', 'et', 'sma', 'udm', 'vep', 'myv', 'kpv', 'se', 'izh', 'fiu']\n* src_constituents: ['dsb', 'ces', 'csb_Latn', 'hsb', 'pol']\n* tgt_constituents: ['hun', 'vro', 'fin', 'liv_Latn', 'mdf', 'krl', 'fkv_Latn', 'mhr', 'est', 'sma', 'udm', 'vep', 'myv', 'kpv', 'sme', 'izh']\n* src_multilingual: True\n* tgt_multilingual: True\n* helsinki_git_sha: a0966db6db0ae616a28471ff0faf461b36fec07d\n* transformers_git_sha: 3857f2b4e34912c942694489c2b667d9476e55f5\n* port_machine: bungle\n* port_time: 2021-06-29-15:24"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #zlw #fiu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zlw-fiu\n* source language name: West Slavic languages\n* target language name: Finno-Ugrian languages\n* OPUS readme: URL\n* model: transformer\n* source language codes: dsb, cs, csb_Latn, hsb, pl, zlw\n* target language codes: hu, vro, fi, liv_Latn, mdf, krl, fkv_Latn, mhr, et, sma, udm, vep, myv, kpv, se, izh, fiu\n* dataset: opus \n* release date: 2021-02-18\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* a sentence-initial language token is required in the form of >>id<<(id = valid, usually three-letter target language ID)\n* Training data: \n * ces-fin: Tatoeba-train (1000000)\n * ces-hun: Tatoeba-train (1000000)\n * pol-est: Tatoeba-train (1000000)\n * pol-fin: Tatoeba-train (1000000)\n * pol-hun: Tatoeba-train (1000000)\n* Validation data: \n * ces-fin: Tatoeba-dev, 1000\n * ces-hun: Tatoeba-dev, 1000\n * est-pol: Tatoeba-dev, 1000\n * fin-pol: Tatoeba-dev, 1000\n * hun-pol: Tatoeba-dev, 1000\n * mhr-pol: Tatoeba-dev, 461\n * total-size-shuffled: 5426\n * devset-selected: top 5000 lines of URL.shuffled!\n* Test data: \n * URL-hun: 502/9733\n * URL-hun: 2525/54965\n * URL-fin: 88/408\n * URL-hun: 1911/10336\n * URL-multi: 4562/25497\n * URL-chm: 5/36\n * URL-est: 15/98\n * URL-fin: 609/3293\n * URL-hun: 1934/11285\n* test set translations file: URL\n* test set scores file: URL\n* BLEU-scores\n|Test set|score|\n|---|---|\n|URL-fin|57.2|\n|URL-hun|42.6|\n|URL-multi|39.4|\n|URL-hun|36.6|\n|URL-fin|36.1|\n|URL-est|20.9|\n|URL-hun|13.9|\n|URL-hun|13.9|\n|URL-chm|2.0|\n* chr-F-scores\n|Test set|score|\n|---|---|\n|URL-fin|0.71|\n|URL-hun|0.637|\n|URL-multi|0.616|\n|URL-hun|0.605|\n|URL-fin|0.592|\n|URL-hun|0.449|\n|URL-hun|0.443|\n|URL-est|0.372|\n|URL-chm|0.007|",
"### System Info: \n* hf_name: zlw-fiu\n* source_languages: dsb,cs,csb_Latn,hsb,pl,zlw\n* target_languages: hu,vro,fi,liv_Latn,mdf,krl,fkv_Latn,mhr,et,sma,udm,vep,myv,kpv,se,izh,fiu\n* opus_readme_url: URL\n* original_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['dsb', 'cs', 'csb_Latn', 'hsb', 'pl', 'zlw', 'hu', 'vro', 'fi', 'liv_Latn', 'mdf', 'krl', 'fkv_Latn', 'mhr', 'et', 'sma', 'udm', 'vep', 'myv', 'kpv', 'se', 'izh', 'fiu']\n* src_constituents: ['dsb', 'ces', 'csb_Latn', 'hsb', 'pol']\n* tgt_constituents: ['hun', 'vro', 'fin', 'liv_Latn', 'mdf', 'krl', 'fkv_Latn', 'mhr', 'est', 'sma', 'udm', 'vep', 'myv', 'kpv', 'sme', 'izh']\n* src_multilingual: True\n* tgt_multilingual: True\n* helsinki_git_sha: a0966db6db0ae616a28471ff0faf461b36fec07d\n* transformers_git_sha: 3857f2b4e34912c942694489c2b667d9476e55f5\n* port_machine: bungle\n* port_time: 2021-06-29-15:24"
] | [
58,
759,
501
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #zlw #fiu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zlw-fiu\n* source language name: West Slavic languages\n* target language name: Finno-Ugrian languages\n* OPUS readme: URL\n* model: transformer\n* source language codes: dsb, cs, csb_Latn, hsb, pl, zlw\n* target language codes: hu, vro, fi, liv_Latn, mdf, krl, fkv_Latn, mhr, et, sma, udm, vep, myv, kpv, se, izh, fiu\n* dataset: opus \n* release date: 2021-02-18\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* a sentence-initial language token is required in the form of >>id<<(id = valid, usually three-letter target language ID)\n* Training data: \n * ces-fin: Tatoeba-train (1000000)\n * ces-hun: Tatoeba-train (1000000)\n * pol-est: Tatoeba-train (1000000)\n * pol-fin: Tatoeba-train (1000000)\n * pol-hun: Tatoeba-train (1000000)\n* Validation data: \n * ces-fin: Tatoeba-dev, 1000\n * ces-hun: Tatoeba-dev, 1000\n * est-pol: Tatoeba-dev, 1000\n * fin-pol: Tatoeba-dev, 1000\n * hun-pol: Tatoeba-dev, 1000\n * mhr-pol: Tatoeba-dev, 461\n * total-size-shuffled: 5426\n * devset-selected: top 5000 lines of URL.shuffled!\n* Test data: \n * URL-hun: 502/9733\n * URL-hun: 2525/54965\n * URL-fin: 88/408\n * URL-hun: 1911/10336\n * URL-multi: 4562/25497\n * URL-chm: 5/36\n * URL-est: 15/98\n * URL-fin: 609/3293\n * URL-hun: 1934/11285\n* test set translations file: URL\n* test set scores file: URL\n* BLEU-scores\n|Test set|score|\n|---|---|\n|URL-fin|57.2|\n|URL-hun|42.6|\n|URL-multi|39.4|\n|URL-hun|36.6|\n|URL-fin|36.1|\n|URL-est|20.9|\n|URL-hun|13.9|\n|URL-hun|13.9|\n|URL-chm|2.0|\n* chr-F-scores\n|Test set|score|\n|---|---|\n|URL-fin|0.71|\n|URL-hun|0.637|\n|URL-multi|0.616|\n|URL-hun|0.605|\n|URL-fin|0.592|\n|URL-hun|0.449|\n|URL-hun|0.443|\n|URL-est|0.372|\n|URL-chm|0.007|### System Info: \n* hf_name: zlw-fiu\n* source_languages: dsb,cs,csb_Latn,hsb,pl,zlw\n* target_languages: hu,vro,fi,liv_Latn,mdf,krl,fkv_Latn,mhr,et,sma,udm,vep,myv,kpv,se,izh,fiu\n* opus_readme_url: URL\n* original_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['dsb', 'cs', 'csb_Latn', 'hsb', 'pl', 'zlw', 'hu', 'vro', 'fi', 'liv_Latn', 'mdf', 'krl', 'fkv_Latn', 'mhr', 'et', 'sma', 'udm', 'vep', 'myv', 'kpv', 'se', 'izh', 'fiu']\n* src_constituents: ['dsb', 'ces', 'csb_Latn', 'hsb', 'pol']\n* tgt_constituents: ['hun', 'vro', 'fin', 'liv_Latn', 'mdf', 'krl', 'fkv_Latn', 'mhr', 'est', 'sma', 'udm', 'vep', 'myv', 'kpv', 'sme', 'izh']\n* src_multilingual: True\n* tgt_multilingual: True\n* helsinki_git_sha: a0966db6db0ae616a28471ff0faf461b36fec07d\n* transformers_git_sha: 3857f2b4e34912c942694489c2b667d9476e55f5\n* port_machine: bungle\n* port_time: 2021-06-29-15:24"
] |
translation | transformers |
### zlw-zlw
* source group: West Slavic languages
* target group: West Slavic languages
* OPUS readme: [zlw-zlw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zlw-zlw/README.md)
* model: transformer
* source language(s): ces dsb hsb pol
* target language(s): ces dsb hsb pol
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zlw/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zlw/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zlw/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ces-hsb.ces.hsb | 2.6 | 0.167 |
| Tatoeba-test.ces-pol.ces.pol | 44.0 | 0.649 |
| Tatoeba-test.dsb-pol.dsb.pol | 8.5 | 0.250 |
| Tatoeba-test.hsb-ces.hsb.ces | 9.6 | 0.276 |
| Tatoeba-test.multi.multi | 38.8 | 0.580 |
| Tatoeba-test.pol-ces.pol.ces | 43.4 | 0.620 |
| Tatoeba-test.pol-dsb.pol.dsb | 2.1 | 0.159 |
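
A minimal usage sketch (not part of the original card), assuming the checkpoint id `Helsinki-NLP/opus-mt-zlw-zlw` and an installed `sentencepiece`; the target language is picked with the `>>id<<` token required above:

```python
# Sketch: Czech -> Polish with an explicit ">>pol<<" target token via the pipeline API.
from transformers import pipeline

translate = pipeline("translation", model="Helsinki-NLP/opus-mt-zlw-zlw")
print(translate(">>pol<< Dobrý den, jak se máte?"))
```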
### System Info:
- hf_name: zlw-zlw
- source_languages: zlw
- target_languages: zlw
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zlw-zlw/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pl', 'cs', 'zlw']
- src_constituents: {'csb_Latn', 'dsb', 'hsb', 'pol', 'ces'}
- tgt_constituents: {'csb_Latn', 'dsb', 'hsb', 'pol', 'ces'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zlw/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-zlw/opus-2020-07-27.test.txt
- src_alpha3: zlw
- tgt_alpha3: zlw
- short_pair: zlw-zlw
- chrF2_score: 0.58
- bleu: 38.8
- brevity_penalty: 0.99
- ref_len: 7792.0
- src_name: West Slavic languages
- tgt_name: West Slavic languages
- train_date: 2020-07-27
- src_alpha2: zlw
- tgt_alpha2: zlw
- prefer_old: False
- long_pair: zlw-zlw
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["pl", "cs", "zlw"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zlw-zlw | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"pl",
"cs",
"zlw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pl",
"cs",
"zlw"
] | TAGS
#transformers #pytorch #marian #text2text-generation #translation #pl #cs #zlw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### zlw-zlw
* source group: West Slavic languages
* target group: West Slavic languages
* OPUS readme: zlw-zlw
* model: transformer
* source language(s): ces dsb hsb pol
* target language(s): ces dsb hsb pol
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 2.6, chr-F: 0.167
testset: URL, BLEU: 44.0, chr-F: 0.649
testset: URL, BLEU: 8.5, chr-F: 0.250
testset: URL, BLEU: 9.6, chr-F: 0.276
testset: URL, BLEU: 38.8, chr-F: 0.580
testset: URL, BLEU: 43.4, chr-F: 0.620
testset: URL, BLEU: 2.1, chr-F: 0.159
### System Info:
* hf\_name: zlw-zlw
* source\_languages: zlw
* target\_languages: zlw
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['pl', 'cs', 'zlw']
* src\_constituents: {'csb\_Latn', 'dsb', 'hsb', 'pol', 'ces'}
* tgt\_constituents: {'csb\_Latn', 'dsb', 'hsb', 'pol', 'ces'}
* src\_multilingual: True
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: zlw
* tgt\_alpha3: zlw
* short\_pair: zlw-zlw
* chrF2\_score: 0.58
* bleu: 38.8
* brevity\_penalty: 0.99
* ref\_len: 7792.0
* src\_name: West Slavic languages
* tgt\_name: West Slavic languages
* train\_date: 2020-07-27
* src\_alpha2: zlw
* tgt\_alpha2: zlw
* prefer\_old: False
* long\_pair: zlw-zlw
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### zlw-zlw\n\n\n* source group: West Slavic languages\n* target group: West Slavic languages\n* OPUS readme: zlw-zlw\n* model: transformer\n* source language(s): ces dsb hsb pol\n* target language(s): ces dsb hsb pol\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 2.6, chr-F: 0.167\ntestset: URL, BLEU: 44.0, chr-F: 0.649\ntestset: URL, BLEU: 8.5, chr-F: 0.250\ntestset: URL, BLEU: 9.6, chr-F: 0.276\ntestset: URL, BLEU: 38.8, chr-F: 0.580\ntestset: URL, BLEU: 43.4, chr-F: 0.620\ntestset: URL, BLEU: 2.1, chr-F: 0.159",
"### System Info:\n\n\n* hf\\_name: zlw-zlw\n* source\\_languages: zlw\n* target\\_languages: zlw\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'cs', 'zlw']\n* src\\_constituents: {'csb\\_Latn', 'dsb', 'hsb', 'pol', 'ces'}\n* tgt\\_constituents: {'csb\\_Latn', 'dsb', 'hsb', 'pol', 'ces'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zlw\n* tgt\\_alpha3: zlw\n* short\\_pair: zlw-zlw\n* chrF2\\_score: 0.58\n* bleu: 38.8\n* brevity\\_penalty: 0.99\n* ref\\_len: 7792.0\n* src\\_name: West Slavic languages\n* tgt\\_name: West Slavic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: zlw\n* tgt\\_alpha2: zlw\n* prefer\\_old: False\n* long\\_pair: zlw-zlw\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #pl #cs #zlw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### zlw-zlw\n\n\n* source group: West Slavic languages\n* target group: West Slavic languages\n* OPUS readme: zlw-zlw\n* model: transformer\n* source language(s): ces dsb hsb pol\n* target language(s): ces dsb hsb pol\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 2.6, chr-F: 0.167\ntestset: URL, BLEU: 44.0, chr-F: 0.649\ntestset: URL, BLEU: 8.5, chr-F: 0.250\ntestset: URL, BLEU: 9.6, chr-F: 0.276\ntestset: URL, BLEU: 38.8, chr-F: 0.580\ntestset: URL, BLEU: 43.4, chr-F: 0.620\ntestset: URL, BLEU: 2.1, chr-F: 0.159",
"### System Info:\n\n\n* hf\\_name: zlw-zlw\n* source\\_languages: zlw\n* target\\_languages: zlw\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'cs', 'zlw']\n* src\\_constituents: {'csb\\_Latn', 'dsb', 'hsb', 'pol', 'ces'}\n* tgt\\_constituents: {'csb\\_Latn', 'dsb', 'hsb', 'pol', 'ces'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zlw\n* tgt\\_alpha3: zlw\n* short\\_pair: zlw-zlw\n* chrF2\\_score: 0.58\n* bleu: 38.8\n* brevity\\_penalty: 0.99\n* ref\\_len: 7792.0\n* src\\_name: West Slavic languages\n* tgt\\_name: West Slavic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: zlw\n* tgt\\_alpha2: zlw\n* prefer\\_old: False\n* long\\_pair: zlw-zlw\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
310,
474
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #pl #cs #zlw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### zlw-zlw\n\n\n* source group: West Slavic languages\n* target group: West Slavic languages\n* OPUS readme: zlw-zlw\n* model: transformer\n* source language(s): ces dsb hsb pol\n* target language(s): ces dsb hsb pol\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 2.6, chr-F: 0.167\ntestset: URL, BLEU: 44.0, chr-F: 0.649\ntestset: URL, BLEU: 8.5, chr-F: 0.250\ntestset: URL, BLEU: 9.6, chr-F: 0.276\ntestset: URL, BLEU: 38.8, chr-F: 0.580\ntestset: URL, BLEU: 43.4, chr-F: 0.620\ntestset: URL, BLEU: 2.1, chr-F: 0.159### System Info:\n\n\n* hf\\_name: zlw-zlw\n* source\\_languages: zlw\n* target\\_languages: zlw\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'cs', 'zlw']\n* src\\_constituents: {'csb\\_Latn', 'dsb', 'hsb', 'pol', 'ces'}\n* tgt\\_constituents: {'csb\\_Latn', 'dsb', 'hsb', 'pol', 'ces'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: zlw\n* tgt\\_alpha3: zlw\n* short\\_pair: zlw-zlw\n* chrF2\\_score: 0.58\n* bleu: 38.8\n* brevity\\_penalty: 0.99\n* ref\\_len: 7792.0\n* src\\_name: West Slavic languages\n* tgt\\_name: West Slavic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: zlw\n* tgt\\_alpha2: zlw\n* prefer\\_old: False\n* long\\_pair: zlw-zlw\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-zne-es
* source languages: zne
* target languages: es
* OPUS readme: [zne-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zne-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zne-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zne.es | 21.1 | 0.382 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zne-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zne",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #zne #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-zne-es
* source languages: zne
* target languages: es
* OPUS readme: zne-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.1, chr-F: 0.382
| [
"### opus-mt-zne-es\n\n\n* source languages: zne\n* target languages: es\n* OPUS readme: zne-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.1, chr-F: 0.382"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zne #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-zne-es\n\n\n* source languages: zne\n* target languages: es\n* OPUS readme: zne-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.1, chr-F: 0.382"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zne #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-zne-es\n\n\n* source languages: zne\n* target languages: es\n* OPUS readme: zne-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.1, chr-F: 0.382"
] |
translation | transformers |
### opus-mt-zne-fi
* source languages: zne
* target languages: fi
* OPUS readme: [zne-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zne-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zne-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zne.fi | 22.8 | 0.432 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zne-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zne",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #zne #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-zne-fi
* source languages: zne
* target languages: fi
* OPUS readme: zne-fi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.8, chr-F: 0.432
| [
"### opus-mt-zne-fi\n\n\n* source languages: zne\n* target languages: fi\n* OPUS readme: zne-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.432"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zne #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-zne-fi\n\n\n* source languages: zne\n* target languages: fi\n* OPUS readme: zne-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.432"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zne #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-zne-fi\n\n\n* source languages: zne\n* target languages: fi\n* OPUS readme: zne-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.432"
] |
translation | transformers |
### opus-mt-zne-fr
* source languages: zne
* target languages: fr
* OPUS readme: [zne-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zne-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zne-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zne.fr | 25.3 | 0.416 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zne-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zne",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #zne #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-zne-fr
* source languages: zne
* target languages: fr
* OPUS readme: zne-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.3, chr-F: 0.416
| [
"### opus-mt-zne-fr\n\n\n* source languages: zne\n* target languages: fr\n* OPUS readme: zne-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.416"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zne #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-zne-fr\n\n\n* source languages: zne\n* target languages: fr\n* OPUS readme: zne-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.416"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zne #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-zne-fr\n\n\n* source languages: zne\n* target languages: fr\n* OPUS readme: zne-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.416"
] |
translation | transformers |
### opus-mt-zne-sv
* source languages: zne
* target languages: sv
* OPUS readme: [zne-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/zne-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/zne-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/zne-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.zne.sv | 25.2 | 0.425 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-zne-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"zne",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #zne #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-zne-sv
* source languages: zne
* target languages: sv
* OPUS readme: zne-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.2, chr-F: 0.425
| [
"### opus-mt-zne-sv\n\n\n* source languages: zne\n* target languages: sv\n* OPUS readme: zne-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.2, chr-F: 0.425"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zne #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-zne-sv\n\n\n* source languages: zne\n* target languages: sv\n* OPUS readme: zne-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.2, chr-F: 0.425"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #zne #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-zne-sv\n\n\n* source languages: zne\n* target languages: sv\n* OPUS readme: zne-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.2, chr-F: 0.425"
] |
translation | transformers | ### af-ru
* source group: Afrikaans
* target group: Russian
* OPUS readme: [afr-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-09-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-09-10.zip)
* test set translations: [opus-2020-09-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-09-10.test.txt)
* test set scores: [opus-2020-09-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-09-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.rus | 38.2 | 0.580 |
### System Info:
- hf_name: af-ru
- source_languages: afr
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'ru']
- src_constituents: ('Afrikaans', {'afr'})
- tgt_constituents: ('Russian', {'rus'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: afr-rus
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-09-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-09-10.test.txt
- src_alpha3: afr
- tgt_alpha3: rus
- chrF2_score: 0.58
- bleu: 38.2
- brevity_penalty: 0.992
- ref_len: 1213
- src_name: Afrikaans
- tgt_name: Russian
- train_date: 2020-01-01 00:00:00
- src_alpha2: af
- tgt_alpha2: ru
- prefer_old: False
- short_pair: af-ru
- helsinki_git_sha: e8c308a96c1bd0b4ca6a8ce174783f93c3e30f25
- transformers_git_sha: 31245775e5772fbded1ac07ed89fbba3b5af0cb9
- port_machine: LM0-400-22516.local
- port_time: 2021-02-12-14:52 | {"language": ["af", "ru"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-af-ru | null | [
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"af",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"af",
"ru"
] | TAGS
#transformers #pytorch #safetensors #marian #text2text-generation #translation #af #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### af-ru
* source group: Afrikaans
* target group: Russian
* OPUS readme: afr-rus
* model: transformer-align
* source language(s): afr
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.2, chr-F: 0.580
### System Info:
* hf\_name: af-ru
* source\_languages: afr
* target\_languages: rus
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['af', 'ru']
* src\_constituents: ('Afrikaans', {'afr'})
* tgt\_constituents: ('Russian', {'rus'})
* src\_multilingual: False
* tgt\_multilingual: False
* long\_pair: afr-rus
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: afr
* tgt\_alpha3: rus
* chrF2\_score: 0.58
* bleu: 38.2
* brevity\_penalty: 0.992
* ref\_len: 1213
* src\_name: Afrikaans
* tgt\_name: Russian
* train\_date: 2020-01-01 00:00:00
* src\_alpha2: af
* tgt\_alpha2: ru
* prefer\_old: False
* short\_pair: af-ru
* helsinki\_git\_sha: e8c308a96c1bd0b4ca6a8ce174783f93c3e30f25
* transformers\_git\_sha: 31245775e5772fbded1ac07ed89fbba3b5af0cb9
* port\_machine: URL
* port\_time: 2021-02-12-14:52
| [
"### af-ru\n\n\n* source group: Afrikaans\n* target group: Russian\n* OPUS readme: afr-rus\n* model: transformer-align\n* source language(s): afr\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.2, chr-F: 0.580",
"### System Info:\n\n\n* hf\\_name: af-ru\n* source\\_languages: afr\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['af', 'ru']\n* src\\_constituents: ('Afrikaans', {'afr'})\n* tgt\\_constituents: ('Russian', {'rus'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: afr-rus\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: afr\n* tgt\\_alpha3: rus\n* chrF2\\_score: 0.58\n* bleu: 38.2\n* brevity\\_penalty: 0.992\n* ref\\_len: 1213\n* src\\_name: Afrikaans\n* tgt\\_name: Russian\n* train\\_date: 2020-01-01 00:00:00\n* src\\_alpha2: af\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* short\\_pair: af-ru\n* helsinki\\_git\\_sha: e8c308a96c1bd0b4ca6a8ce174783f93c3e30f25\n* transformers\\_git\\_sha: 31245775e5772fbded1ac07ed89fbba3b5af0cb9\n* port\\_machine: URL\n* port\\_time: 2021-02-12-14:52"
] | [
"TAGS\n#transformers #pytorch #safetensors #marian #text2text-generation #translation #af #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### af-ru\n\n\n* source group: Afrikaans\n* target group: Russian\n* OPUS readme: afr-rus\n* model: transformer-align\n* source language(s): afr\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.2, chr-F: 0.580",
"### System Info:\n\n\n* hf\\_name: af-ru\n* source\\_languages: afr\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['af', 'ru']\n* src\\_constituents: ('Afrikaans', {'afr'})\n* tgt\\_constituents: ('Russian', {'rus'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: afr-rus\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: afr\n* tgt\\_alpha3: rus\n* chrF2\\_score: 0.58\n* bleu: 38.2\n* brevity\\_penalty: 0.992\n* ref\\_len: 1213\n* src\\_name: Afrikaans\n* tgt\\_name: Russian\n* train\\_date: 2020-01-01 00:00:00\n* src\\_alpha2: af\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* short\\_pair: af-ru\n* helsinki\\_git\\_sha: e8c308a96c1bd0b4ca6a8ce174783f93c3e30f25\n* transformers\\_git\\_sha: 31245775e5772fbded1ac07ed89fbba3b5af0cb9\n* port\\_machine: URL\n* port\\_time: 2021-02-12-14:52"
] | [
52,
132,
411
] | [
"TAGS\n#transformers #pytorch #safetensors #marian #text2text-generation #translation #af #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### af-ru\n\n\n* source group: Afrikaans\n* target group: Russian\n* OPUS readme: afr-rus\n* model: transformer-align\n* source language(s): afr\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.2, chr-F: 0.580### System Info:\n\n\n* hf\\_name: af-ru\n* source\\_languages: afr\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['af', 'ru']\n* src\\_constituents: ('Afrikaans', {'afr'})\n* tgt\\_constituents: ('Russian', {'rus'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: afr-rus\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: afr\n* tgt\\_alpha3: rus\n* chrF2\\_score: 0.58\n* bleu: 38.2\n* brevity\\_penalty: 0.992\n* ref\\_len: 1213\n* src\\_name: Afrikaans\n* tgt\\_name: Russian\n* train\\_date: 2020-01-01 00:00:00\n* src\\_alpha2: af\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* short\\_pair: af-ru\n* helsinki\\_git\\_sha: e8c308a96c1bd0b4ca6a8ce174783f93c3e30f25\n* transformers\\_git\\_sha: 31245775e5772fbded1ac07ed89fbba3b5af0cb9\n* port\\_machine: URL\n* port\\_time: 2021-02-12-14:52"
] |
translation | transformers | ### de-ro
* source group: German
* target group: Romanian
* OPUS readme: [deu-ron](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-ron/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): mol ron
* raw source language(s): deu
* raw target language(s): mol ron
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* valid language labels: >>mol<< >>ron<<
* download original weights: [opusTCv20210807-2021-10-22.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ron/opusTCv20210807-2021-10-22.zip)
* test set translations: [opusTCv20210807-2021-10-22.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ron/opusTCv20210807-2021-10-22.test.txt)
* test set scores: [opusTCv20210807-2021-10-22.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ron/opusTCv20210807-2021-10-22.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| Tatoeba-test-v2021-08-07.deu-ron | 42.0 | 0.636 | 1141 | 7432 | 0.976 |
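A minimal usage sketch (not part of the original OPUS card), using the `transformers` MarianMT classes; the example sentence and generation settings are illustrative assumptions:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-tatoeba-de-ro"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The card above requires a sentence-initial target-language token (>>mol<< or >>ron<<).
src = [">>ron<< Der Zug kommt um acht Uhr an."]  # example sentence is an assumption
batch = tokenizer(src, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```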
### System Info:
- hf_name: de-ro
- source_languages: deu
- target_languages: ron
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-ron/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'ro']
- src_constituents: ('German', {'deu'})
- tgt_constituents: ('Romanian', {'ron'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: deu-ron
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ron/opusTCv20210807-2021-10-22.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-ron/opusTCv20210807-2021-10-22.test.txt
- src_alpha3: deu
- tgt_alpha3: ron
- chrF2_score: 0.636
- bleu: 42.0
- src_name: German
- tgt_name: Romanian
- train_date: 2021-10-22 00:00:00
- src_alpha2: de
- tgt_alpha2: ro
- prefer_old: False
- short_pair: de-ro
- helsinki_git_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002
- transformers_git_sha: df1f94eb4a18b1a27d27e32040b60a17410d516e
- port_machine: LM0-400-22516.local
- port_time: 2021-11-08-16:45 | {"language": ["de", "ro"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-de-ro | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"de",
"ro",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"de",
"ro"
] | TAGS
#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #de #ro #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### de-ro
* source group: German
* target group: Romanian
* OPUS readme: deu-ron
* model: transformer-align
* source language(s): deu
* target language(s): mol ron
* raw source language(s): deu
* raw target language(s): mol ron
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* valid language labels: >>mol<< >>ron<<
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
### System Info:
* hf\_name: de-ro
* source\_languages: deu
* target\_languages: ron
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['de', 'ro']
* src\_constituents: ('German', {'deu'})
* tgt\_constituents: ('Romanian', {'ron'})
* src\_multilingual: False
* tgt\_multilingual: False
* long\_pair: deu-ron
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: deu
* tgt\_alpha3: ron
* chrF2\_score: 0.636
* bleu: 42.0
* src\_name: German
* tgt\_name: Romanian
* train\_date: 2021-10-22 00:00:00
* src\_alpha2: de
* tgt\_alpha2: ro
* prefer\_old: False
* short\_pair: de-ro
* helsinki\_git\_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002
* transformers\_git\_sha: df1f94eb4a18b1a27d27e32040b60a17410d516e
* port\_machine: URL
* port\_time: 2021-11-08-16:45
| [
"### de-ro\n\n\n* source group: German\n* target group: Romanian\n* OPUS readme: deu-ron\n* model: transformer-align\n* source language(s): deu\n* target language(s): mol ron\n* raw source language(s): deu\n* raw target language(s): mol ron\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* valid language labels: >>mol<< >>ron<<\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------",
"### System Info:\n\n\n* hf\\_name: de-ro\n* source\\_languages: deu\n* target\\_languages: ron\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['de', 'ro']\n* src\\_constituents: ('German', {'deu'})\n* tgt\\_constituents: ('Romanian', {'ron'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: deu-ron\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: deu\n* tgt\\_alpha3: ron\n* chrF2\\_score: 0.636\n* bleu: 42.0\n* src\\_name: German\n* tgt\\_name: Romanian\n* train\\_date: 2021-10-22 00:00:00\n* src\\_alpha2: de\n* tgt\\_alpha2: ro\n* prefer\\_old: False\n* short\\_pair: de-ro\n* helsinki\\_git\\_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002\n* transformers\\_git\\_sha: df1f94eb4a18b1a27d27e32040b60a17410d516e\n* port\\_machine: URL\n* port\\_time: 2021-11-08-16:45"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #de #ro #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### de-ro\n\n\n* source group: German\n* target group: Romanian\n* OPUS readme: deu-ron\n* model: transformer-align\n* source language(s): deu\n* target language(s): mol ron\n* raw source language(s): deu\n* raw target language(s): mol ron\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* valid language labels: >>mol<< >>ron<<\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------",
"### System Info:\n\n\n* hf\\_name: de-ro\n* source\\_languages: deu\n* target\\_languages: ron\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['de', 'ro']\n* src\\_constituents: ('German', {'deu'})\n* tgt\\_constituents: ('Romanian', {'ron'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: deu-ron\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: deu\n* tgt\\_alpha3: ron\n* chrF2\\_score: 0.636\n* bleu: 42.0\n* src\\_name: German\n* tgt\\_name: Romanian\n* train\\_date: 2021-10-22 00:00:00\n* src\\_alpha2: de\n* tgt\\_alpha2: ro\n* prefer\\_old: False\n* short\\_pair: de-ro\n* helsinki\\_git\\_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002\n* transformers\\_git\\_sha: df1f94eb4a18b1a27d27e32040b60a17410d516e\n* port\\_machine: URL\n* port\\_time: 2021-11-08-16:45"
] | [
55,
176,
394
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #de #ro #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### de-ro\n\n\n* source group: German\n* target group: Romanian\n* OPUS readme: deu-ron\n* model: transformer-align\n* source language(s): deu\n* target language(s): mol ron\n* raw source language(s): deu\n* raw target language(s): mol ron\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* valid language labels: >>mol<< >>ron<<\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------### System Info:\n\n\n* hf\\_name: de-ro\n* source\\_languages: deu\n* target\\_languages: ron\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['de', 'ro']\n* src\\_constituents: ('German', {'deu'})\n* tgt\\_constituents: ('Romanian', {'ron'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: deu-ron\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: deu\n* tgt\\_alpha3: ron\n* chrF2\\_score: 0.636\n* bleu: 42.0\n* src\\_name: German\n* tgt\\_name: Romanian\n* train\\_date: 2021-10-22 00:00:00\n* src\\_alpha2: de\n* tgt\\_alpha2: ro\n* prefer\\_old: False\n* short\\_pair: de-ro\n* helsinki\\_git\\_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002\n* transformers\\_git\\_sha: df1f94eb4a18b1a27d27e32040b60a17410d516e\n* port\\_machine: URL\n* port\\_time: 2021-11-08-16:45"
] |
translation | transformers | ### en-ja
* source group: English
* target group: Japanese
* OPUS readme: [eng-jpn](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-jpn/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): jpn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus+bt-2021-04-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.zip)
* test set translations: [opus+bt-2021-04-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.test.txt)
* test set scores: [opus+bt-2021-04-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| Tatoeba-test.eng-jpn | 15.2 | 0.258 | 10000 | 99206 | 1.000 |
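A minimal usage sketch (illustrative, not from the original card), assuming the standard MarianMT API in `transformers`:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-tatoeba-en-ja"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# No target-language token is needed for this pair; the input sentence is an assumption.
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```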
### System Info:
- hf_name: en-ja
- source_languages: eng
- target_languages: jpn
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-jpn/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ja']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('Japanese', {'jpn', 'jpn_Latn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hira', 'jpn_Hang', 'jpn_Bopo', 'jpn_Hani'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-jpn
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-jpn/opus+bt-2021-04-10.test.txt
- src_alpha3: eng
- tgt_alpha3: jpn
- chrF2_score: 0.258
- bleu: 15.2
- src_name: English
- tgt_name: Japanese
- train_date: 2021-04-10 00:00:00
- src_alpha2: en
- tgt_alpha2: ja
- prefer_old: False
- short_pair: en-ja
- helsinki_git_sha: 70b0a9621f054ef1d8ea81f7d55595d7f64d19ff
- transformers_git_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
- port_machine: LM0-400-22516.local
- port_time: 2021-10-12-11:13 | {"language": ["en", "ja"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-en-ja | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"ja"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ja #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### en-ja
* source group: English
* target group: Japanese
* OPUS readme: eng-jpn
* model: transformer-align
* source language(s): eng
* target language(s): jpn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: opus+URL
* test set translations: opus+URL
* test set scores: opus+URL
Benchmarks
----------
### System Info:
* hf\_name: en-ja
* source\_languages: eng
* target\_languages: jpn
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'ja']
* src\_constituents: ('English', {'eng'})
* tgt\_constituents: ('Japanese', {'jpn', 'jpn\_Latn', 'jpn\_Yiii', 'jpn\_Kana', 'jpn\_Hira', 'jpn\_Hang', 'jpn\_Bopo', 'jpn\_Hani'})
* src\_multilingual: False
* tgt\_multilingual: False
* long\_pair: eng-jpn
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: jpn
* chrF2\_score: 0.258
* bleu: 15.2
* src\_name: English
* tgt\_name: Japanese
* train\_date: 2021-04-10 00:00:00
* src\_alpha2: en
* tgt\_alpha2: ja
* prefer\_old: False
* short\_pair: en-ja
* helsinki\_git\_sha: 70b0a9621f054ef1d8ea81f7d55595d7f64d19ff
* transformers\_git\_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
* port\_machine: URL
* port\_time: 2021-10-12-11:13
| [
"### en-ja\n\n\n* source group: English\n* target group: Japanese\n* OPUS readme: eng-jpn\n* model: transformer-align\n* source language(s): eng\n* target language(s): jpn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------",
"### System Info:\n\n\n* hf\\_name: en-ja\n* source\\_languages: eng\n* target\\_languages: jpn\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ja']\n* src\\_constituents: ('English', {'eng'})\n* tgt\\_constituents: ('Japanese', {'jpn', 'jpn\\_Latn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hira', 'jpn\\_Hang', 'jpn\\_Bopo', 'jpn\\_Hani'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: eng-jpn\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: jpn\n* chrF2\\_score: 0.258\n* bleu: 15.2\n* src\\_name: English\n* tgt\\_name: Japanese\n* train\\_date: 2021-04-10 00:00:00\n* src\\_alpha2: en\n* tgt\\_alpha2: ja\n* prefer\\_old: False\n* short\\_pair: en-ja\n* helsinki\\_git\\_sha: 70b0a9621f054ef1d8ea81f7d55595d7f64d19ff\n* transformers\\_git\\_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b\n* port\\_machine: URL\n* port\\_time: 2021-10-12-11:13"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ja #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### en-ja\n\n\n* source group: English\n* target group: Japanese\n* OPUS readme: eng-jpn\n* model: transformer-align\n* source language(s): eng\n* target language(s): jpn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------",
"### System Info:\n\n\n* hf\\_name: en-ja\n* source\\_languages: eng\n* target\\_languages: jpn\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ja']\n* src\\_constituents: ('English', {'eng'})\n* tgt\\_constituents: ('Japanese', {'jpn', 'jpn\\_Latn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hira', 'jpn\\_Hang', 'jpn\\_Bopo', 'jpn\\_Hani'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: eng-jpn\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: jpn\n* chrF2\\_score: 0.258\n* bleu: 15.2\n* src\\_name: English\n* tgt\\_name: Japanese\n* train\\_date: 2021-04-10 00:00:00\n* src\\_alpha2: en\n* tgt\\_alpha2: ja\n* prefer\\_old: False\n* short\\_pair: en-ja\n* helsinki\\_git\\_sha: 70b0a9621f054ef1d8ea81f7d55595d7f64d19ff\n* transformers\\_git\\_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b\n* port\\_machine: URL\n* port\\_time: 2021-10-12-11:13"
] | [
51,
116,
456
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ja #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### en-ja\n\n\n* source group: English\n* target group: Japanese\n* OPUS readme: eng-jpn\n* model: transformer-align\n* source language(s): eng\n* target language(s): jpn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------### System Info:\n\n\n* hf\\_name: en-ja\n* source\\_languages: eng\n* target\\_languages: jpn\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ja']\n* src\\_constituents: ('English', {'eng'})\n* tgt\\_constituents: ('Japanese', {'jpn', 'jpn\\_Latn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hira', 'jpn\\_Hang', 'jpn\\_Bopo', 'jpn\\_Hani'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: eng-jpn\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: jpn\n* chrF2\\_score: 0.258\n* bleu: 15.2\n* src\\_name: English\n* tgt\\_name: Japanese\n* train\\_date: 2021-04-10 00:00:00\n* src\\_alpha2: en\n* tgt\\_alpha2: ja\n* prefer\\_old: False\n* short\\_pair: en-ja\n* helsinki\\_git\\_sha: 70b0a9621f054ef1d8ea81f7d55595d7f64d19ff\n* transformers\\_git\\_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b\n* port\\_machine: URL\n* port\\_time: 2021-10-12-11:13"
] |
translation | transformers | ### en-ro
* source group: English
* target group: Romanian
* OPUS readme: [eng-ron](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ron/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): mol ron
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* valid language labels:
* download original weights: [opus+bt-2021-03-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opus+bt-2021-03-07.zip)
* test set translations: [opus+bt-2021-03-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opus+bt-2021-03-07.test.txt)
* test set scores: [opus+bt-2021-03-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opus+bt-2021-03-07.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| newsdev2016-enro.eng-ron | 33.5 | 0.610 | 1999 | 51566 | 0.984 |
| newstest2016-enro.eng-ron | 31.7 | 0.591 | 1999 | 49094 | 0.998 |
| Tatoeba-test.eng-ron | 46.9 | 0.678 | 5000 | 36851 | 0.983 |
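A minimal usage sketch (not part of the original card); the `>>ron<<` prefix follows the target-token convention described above and is an assumption based on the listed target languages (mol, ron), as is the example sentence:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-tatoeba-en-ro"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Sentence-initial target-language token prepended per the card; >>ron<< is assumed here.
batch = tokenizer([">>ron<< The train arrives at eight o'clock."], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```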
### System Info:
- hf_name: en-ro
- source_languages: eng
- target_languages: ron
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ron/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ro']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('Romanian', {'ron'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-ron
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opus+bt-2021-03-07.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ron/opus+bt-2021-03-07.test.txt
- src_alpha3: eng
- tgt_alpha3: ron
- chrF2_score: 0.678
- bleu: 46.9
- src_name: English
- tgt_name: Romanian
- train_date: 2021-03-07 00:00:00
- src_alpha2: en
- tgt_alpha2: ro
- prefer_old: False
- short_pair: en-ro
- helsinki_git_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002
- transformers_git_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
- port_machine: LM0-400-22516.local
- port_time: 2021-11-08-09:31 | {"language": ["en", "ro"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-en-ro | null | [
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"en",
"ro",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"ro"
] | TAGS
#transformers #pytorch #safetensors #marian #text2text-generation #translation #en #ro #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### en-ro
* source group: English
* target group: Romanian
* OPUS readme: eng-ron
* model: transformer-align
* source language(s): eng
* target language(s): mol ron
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* valid language labels:
* download original weights: opus+URL
* test set translations: opus+URL
* test set scores: opus+URL
Benchmarks
----------
### System Info:
* hf\_name: en-ro
* source\_languages: eng
* target\_languages: ron
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'ro']
* src\_constituents: ('English', {'eng'})
* tgt\_constituents: ('Romanian', {'ron'})
* src\_multilingual: False
* tgt\_multilingual: False
* long\_pair: eng-ron
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: ron
* chrF2\_score: 0.678
* bleu: 46.9
* src\_name: English
* tgt\_name: Romanian
* train\_date: 2021-03-07 00:00:00
* src\_alpha2: en
* tgt\_alpha2: ro
* prefer\_old: False
* short\_pair: en-ro
* helsinki\_git\_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002
* transformers\_git\_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
* port\_machine: URL
* port\_time: 2021-11-08-09:31
| [
"### en-ro\n\n\n* source group: English\n* target group: Romanian\n* OPUS readme: eng-ron\n* model: transformer-align\n* source language(s): eng\n* target language(s): mol ron\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* valid language labels:\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------",
"### System Info:\n\n\n* hf\\_name: en-ro\n* source\\_languages: eng\n* target\\_languages: ron\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ro']\n* src\\_constituents: ('English', {'eng'})\n* tgt\\_constituents: ('Romanian', {'ron'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: eng-ron\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: ron\n* chrF2\\_score: 0.678\n* bleu: 46.9\n* src\\_name: English\n* tgt\\_name: Romanian\n* train\\_date: 2021-03-07 00:00:00\n* src\\_alpha2: en\n* tgt\\_alpha2: ro\n* prefer\\_old: False\n* short\\_pair: en-ro\n* helsinki\\_git\\_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002\n* transformers\\_git\\_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b\n* port\\_machine: URL\n* port\\_time: 2021-11-08-09:31"
] | [
"TAGS\n#transformers #pytorch #safetensors #marian #text2text-generation #translation #en #ro #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### en-ro\n\n\n* source group: English\n* target group: Romanian\n* OPUS readme: eng-ron\n* model: transformer-align\n* source language(s): eng\n* target language(s): mol ron\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* valid language labels:\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------",
"### System Info:\n\n\n* hf\\_name: en-ro\n* source\\_languages: eng\n* target\\_languages: ron\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ro']\n* src\\_constituents: ('English', {'eng'})\n* tgt\\_constituents: ('Romanian', {'ron'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: eng-ron\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: ron\n* chrF2\\_score: 0.678\n* bleu: 46.9\n* src\\_name: English\n* tgt\\_name: Romanian\n* train\\_date: 2021-03-07 00:00:00\n* src\\_alpha2: en\n* tgt\\_alpha2: ro\n* prefer\\_old: False\n* short\\_pair: en-ro\n* helsinki\\_git\\_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002\n* transformers\\_git\\_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b\n* port\\_machine: URL\n* port\\_time: 2021-11-08-09:31"
] | [
52,
148,
391
] | [
"TAGS\n#transformers #pytorch #safetensors #marian #text2text-generation #translation #en #ro #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### en-ro\n\n\n* source group: English\n* target group: Romanian\n* OPUS readme: eng-ron\n* model: transformer-align\n* source language(s): eng\n* target language(s): mol ron\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* valid language labels:\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------### System Info:\n\n\n* hf\\_name: en-ro\n* source\\_languages: eng\n* target\\_languages: ron\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ro']\n* src\\_constituents: ('English', {'eng'})\n* tgt\\_constituents: ('Romanian', {'ron'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: eng-ron\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: ron\n* chrF2\\_score: 0.678\n* bleu: 46.9\n* src\\_name: English\n* tgt\\_name: Romanian\n* train\\_date: 2021-03-07 00:00:00\n* src\\_alpha2: en\n* tgt\\_alpha2: ro\n* prefer\\_old: False\n* short\\_pair: en-ro\n* helsinki\\_git\\_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002\n* transformers\\_git\\_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b\n* port\\_machine: URL\n* port\\_time: 2021-11-08-09:31"
] |
translation | transformers | ### en-tr
* source group: English
* target group: Turkish
* OPUS readme: [eng-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-tur/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): tur
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus+bt-2021-04-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opus+bt-2021-04-10.zip)
* test set translations: [opus+bt-2021-04-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opus+bt-2021-04-10.test.txt)
* test set scores: [opus+bt-2021-04-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opus+bt-2021-04-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| newsdev2016-entr.eng-tur | 21.5 | 0.575 | 1001 | 16127 | 1.000 |
| newstest2016-entr.eng-tur | 21.4 | 0.558 | 3000 | 50782 | 0.986 |
| newstest2017-entr.eng-tur | 22.8 | 0.572 | 3007 | 51977 | 0.960 |
| newstest2018-entr.eng-tur | 20.8 | 0.561 | 3000 | 53731 | 0.963 |
| Tatoeba-test.eng-tur | 41.5 | 0.684 | 10000 | 60469 | 0.932 |
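A minimal usage sketch with the `transformers` translation pipeline (illustrative only; the input sentence is an assumption):

```python
from transformers import pipeline

# Loads the Marian checkpoint behind a simple translation pipeline.
translator = pipeline("translation", model="Helsinki-NLP/opus-tatoeba-en-tr")
print(translator("The weather is nice today.")[0]["translation_text"])
```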
### System Info:
- hf_name: en-tr
- source_languages: eng
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'tr']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('Turkish', {'tur'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-tur
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opus+bt-2021-04-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opus+bt-2021-04-10.test.txt
- src_alpha3: eng
- tgt_alpha3: tur
- chrF2_score: 0.684
- bleu: 41.5
- src_name: English
- tgt_name: Turkish
- train_date: 2021-04-10 00:00:00
- src_alpha2: en
- tgt_alpha2: tr
- prefer_old: False
- short_pair: en-tr
- helsinki_git_sha: a6bd0607aec9603811b2b635aec3f566f3add79d
- transformers_git_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
- port_machine: LM0-400-22516.local
- port_time: 2021-10-05-12:13 | {"language": ["en", "tr"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-en-tr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"tr"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### en-tr
* source group: English
* target group: Turkish
* OPUS readme: eng-tur
* model: transformer-align
* source language(s): eng
* target language(s): tur
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: opus+URL
* test set translations: opus+URL
* test set scores: opus+URL
Benchmarks
----------
### System Info:
* hf\_name: en-tr
* source\_languages: eng
* target\_languages: tur
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'tr']
* src\_constituents: ('English', {'eng'})
* tgt\_constituents: ('Turkish', {'tur'})
* src\_multilingual: False
* tgt\_multilingual: False
* long\_pair: eng-tur
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: tur
* chrF2\_score: 0.684
* bleu: 41.5
* src\_name: English
* tgt\_name: Turkish
* train\_date: 2021-04-10 00:00:00
* src\_alpha2: en
* tgt\_alpha2: tr
* prefer\_old: False
* short\_pair: en-tr
* helsinki\_git\_sha: a6bd0607aec9603811b2b635aec3f566f3add79d
* transformers\_git\_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
* port\_machine: URL
* port\_time: 2021-10-05-12:13
| [
"### en-tr\n\n\n* source group: English\n* target group: Turkish\n* OPUS readme: eng-tur\n* model: transformer-align\n* source language(s): eng\n* target language(s): tur\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------",
"### System Info:\n\n\n* hf\\_name: en-tr\n* source\\_languages: eng\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'tr']\n* src\\_constituents: ('English', {'eng'})\n* tgt\\_constituents: ('Turkish', {'tur'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: eng-tur\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: tur\n* chrF2\\_score: 0.684\n* bleu: 41.5\n* src\\_name: English\n* tgt\\_name: Turkish\n* train\\_date: 2021-04-10 00:00:00\n* src\\_alpha2: en\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* short\\_pair: en-tr\n* helsinki\\_git\\_sha: a6bd0607aec9603811b2b635aec3f566f3add79d\n* transformers\\_git\\_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b\n* port\\_machine: URL\n* port\\_time: 2021-10-05-12:13"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### en-tr\n\n\n* source group: English\n* target group: Turkish\n* OPUS readme: eng-tur\n* model: transformer-align\n* source language(s): eng\n* target language(s): tur\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------",
"### System Info:\n\n\n* hf\\_name: en-tr\n* source\\_languages: eng\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'tr']\n* src\\_constituents: ('English', {'eng'})\n* tgt\\_constituents: ('Turkish', {'tur'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: eng-tur\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: tur\n* chrF2\\_score: 0.684\n* bleu: 41.5\n* src\\_name: English\n* tgt\\_name: Turkish\n* train\\_date: 2021-04-10 00:00:00\n* src\\_alpha2: en\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* short\\_pair: en-tr\n* helsinki\\_git\\_sha: a6bd0607aec9603811b2b635aec3f566f3add79d\n* transformers\\_git\\_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b\n* port\\_machine: URL\n* port\\_time: 2021-10-05-12:13"
] | [
51,
116,
395
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### en-tr\n\n\n* source group: English\n* target group: Turkish\n* OPUS readme: eng-tur\n* model: transformer-align\n* source language(s): eng\n* target language(s): tur\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------### System Info:\n\n\n* hf\\_name: en-tr\n* source\\_languages: eng\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'tr']\n* src\\_constituents: ('English', {'eng'})\n* tgt\\_constituents: ('Turkish', {'tur'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: eng-tur\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: tur\n* chrF2\\_score: 0.684\n* bleu: 41.5\n* src\\_name: English\n* tgt\\_name: Turkish\n* train\\_date: 2021-04-10 00:00:00\n* src\\_alpha2: en\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* short\\_pair: en-tr\n* helsinki\\_git\\_sha: a6bd0607aec9603811b2b635aec3f566f3add79d\n* transformers\\_git\\_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b\n* port\\_machine: URL\n* port\\_time: 2021-10-05-12:13"
] |
translation | transformers | ### es-zh
* source group: Spanish
* target group: Chinese
* OPUS readme: [spa-zho](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-zho/README.md)
* model: transformer
* source language(s): spa
* target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant hsn hsn_Hani lzh nan wuu yue_Hans yue_Hant
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2021-01-04.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zho/opus-2021-01-04.zip)
* test set translations: [opus-2021-01-04.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zho/opus-2021-01-04.test.txt)
* test set scores: [opus-2021-01-04.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zho/opus-2021-01-04.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.zho | 38.8 | 0.324 |
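A minimal usage sketch (not from the original card); `>>cmn_Hans<<` is one of the target labels listed above and is chosen here as an assumption, as is the example sentence:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-tatoeba-es-zh"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Sentence-initial target-language token required per the card; cmn_Hans = Simplified Mandarin.
batch = tokenizer([">>cmn_Hans<< ¿Dónde está la estación de tren?"], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```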
### System Info:
- hf_name: es-zh
- source_languages: spa
- target_languages: zho
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-zho/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'zh']
- src_constituents: ('Spanish', {'spa'})
- tgt_constituents: ('Chinese', {'wuu_Bopo', 'wuu', 'cmn_Hang', 'lzh_Kana', 'lzh', 'wuu_Hani', 'lzh_Yiii', 'yue_Hans', 'cmn_Hani', 'cjy_Hans', 'cmn_Hans', 'cmn_Kana', 'zho_Hans', 'zho_Hant', 'yue', 'cmn_Bopo', 'yue_Hang', 'lzh_Hans', 'wuu_Latn', 'yue_Hant', 'hak_Hani', 'lzh_Bopo', 'cmn_Hant', 'lzh_Hani', 'lzh_Hang', 'cmn', 'lzh_Hira', 'yue_Bopo', 'yue_Hani', 'gan', 'zho', 'cmn_Yiii', 'yue_Hira', 'cmn_Latn', 'yue_Kana', 'cjy_Hant', 'cmn_Hira', 'nan_Hani', 'nan'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: spa-zho
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zho/opus-2021-01-04.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zho/opus-2021-01-04.test.txt
- src_alpha3: spa
- tgt_alpha3: zho
- chrF2_score: 0.324
- bleu: 38.8
- brevity_penalty: 0.878
- ref_len: 22762.0
- src_name: Spanish
- tgt_name: Chinese
- train_date: 2021-01-04 00:00:00
- src_alpha2: es
- tgt_alpha2: zh
- prefer_old: False
- short_pair: es-zh
- helsinki_git_sha: dfdcef114ffb8a8dbb7a3fcf84bde5af50309500
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2021-01-04-18:53 | {"language": ["es", "zh"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-es-zh | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es",
"zh"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #es #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### es-zh
* source group: Spanish
* target group: Chinese
* OPUS readme: spa-zho
* model: transformer
* source language(s): spa
* target language(s): cjy\_Hans cjy\_Hant cmn cmn\_Hans cmn\_Hant hsn hsn\_Hani lzh nan wuu yue\_Hans yue\_Hant
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.8, chr-F: 0.324
### System Info:
* hf\_name: es-zh
* source\_languages: spa
* target\_languages: zho
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['es', 'zh']
* src\_constituents: ('Spanish', {'spa'})
* tgt\_constituents: ('Chinese', {'wuu\_Bopo', 'wuu', 'cmn\_Hang', 'lzh\_Kana', 'lzh', 'wuu\_Hani', 'lzh\_Yiii', 'yue\_Hans', 'cmn\_Hani', 'cjy\_Hans', 'cmn\_Hans', 'cmn\_Kana', 'zho\_Hans', 'zho\_Hant', 'yue', 'cmn\_Bopo', 'yue\_Hang', 'lzh\_Hans', 'wuu\_Latn', 'yue\_Hant', 'hak\_Hani', 'lzh\_Bopo', 'cmn\_Hant', 'lzh\_Hani', 'lzh\_Hang', 'cmn', 'lzh\_Hira', 'yue\_Bopo', 'yue\_Hani', 'gan', 'zho', 'cmn\_Yiii', 'yue\_Hira', 'cmn\_Latn', 'yue\_Kana', 'cjy\_Hant', 'cmn\_Hira', 'nan\_Hani', 'nan'})
* src\_multilingual: False
* tgt\_multilingual: False
* long\_pair: spa-zho
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: spa
* tgt\_alpha3: zho
* chrF2\_score: 0.324
* bleu: 38.8
* brevity\_penalty: 0.878
* ref\_len: 22762.0
* src\_name: Spanish
* tgt\_name: Chinese
* train\_date: 2021-01-04 00:00:00
* src\_alpha2: es
* tgt\_alpha2: zh
* prefer\_old: False
* short\_pair: es-zh
* helsinki\_git\_sha: dfdcef114ffb8a8dbb7a3fcf84bde5af50309500
* transformers\_git\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
* port\_machine: URL
* port\_time: 2021-01-04-18:53
| [
"### es-zh\n\n\n* source group: Spanish\n* target group: Chinese\n* OPUS readme: spa-zho\n* model: transformer\n* source language(s): spa\n* target language(s): cjy\\_Hans cjy\\_Hant cmn cmn\\_Hans cmn\\_Hant hsn hsn\\_Hani lzh nan wuu yue\\_Hans yue\\_Hant\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.8, chr-F: 0.324",
"### System Info:\n\n\n* hf\\_name: es-zh\n* source\\_languages: spa\n* target\\_languages: zho\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['es', 'zh']\n* src\\_constituents: ('Spanish', {'spa'})\n* tgt\\_constituents: ('Chinese', {'wuu\\_Bopo', 'wuu', 'cmn\\_Hang', 'lzh\\_Kana', 'lzh', 'wuu\\_Hani', 'lzh\\_Yiii', 'yue\\_Hans', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn\\_Hans', 'cmn\\_Kana', 'zho\\_Hans', 'zho\\_Hant', 'yue', 'cmn\\_Bopo', 'yue\\_Hang', 'lzh\\_Hans', 'wuu\\_Latn', 'yue\\_Hant', 'hak\\_Hani', 'lzh\\_Bopo', 'cmn\\_Hant', 'lzh\\_Hani', 'lzh\\_Hang', 'cmn', 'lzh\\_Hira', 'yue\\_Bopo', 'yue\\_Hani', 'gan', 'zho', 'cmn\\_Yiii', 'yue\\_Hira', 'cmn\\_Latn', 'yue\\_Kana', 'cjy\\_Hant', 'cmn\\_Hira', 'nan\\_Hani', 'nan'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: spa-zho\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: spa\n* tgt\\_alpha3: zho\n* chrF2\\_score: 0.324\n* bleu: 38.8\n* brevity\\_penalty: 0.878\n* ref\\_len: 22762.0\n* src\\_name: Spanish\n* tgt\\_name: Chinese\n* train\\_date: 2021-01-04 00:00:00\n* src\\_alpha2: es\n* tgt\\_alpha2: zh\n* prefer\\_old: False\n* short\\_pair: es-zh\n* helsinki\\_git\\_sha: dfdcef114ffb8a8dbb7a3fcf84bde5af50309500\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2021-01-04-18:53"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #es #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### es-zh\n\n\n* source group: Spanish\n* target group: Chinese\n* OPUS readme: spa-zho\n* model: transformer\n* source language(s): spa\n* target language(s): cjy\\_Hans cjy\\_Hant cmn cmn\\_Hans cmn\\_Hant hsn hsn\\_Hani lzh nan wuu yue\\_Hans yue\\_Hant\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.8, chr-F: 0.324",
"### System Info:\n\n\n* hf\\_name: es-zh\n* source\\_languages: spa\n* target\\_languages: zho\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['es', 'zh']\n* src\\_constituents: ('Spanish', {'spa'})\n* tgt\\_constituents: ('Chinese', {'wuu\\_Bopo', 'wuu', 'cmn\\_Hang', 'lzh\\_Kana', 'lzh', 'wuu\\_Hani', 'lzh\\_Yiii', 'yue\\_Hans', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn\\_Hans', 'cmn\\_Kana', 'zho\\_Hans', 'zho\\_Hant', 'yue', 'cmn\\_Bopo', 'yue\\_Hang', 'lzh\\_Hans', 'wuu\\_Latn', 'yue\\_Hant', 'hak\\_Hani', 'lzh\\_Bopo', 'cmn\\_Hant', 'lzh\\_Hani', 'lzh\\_Hang', 'cmn', 'lzh\\_Hira', 'yue\\_Bopo', 'yue\\_Hani', 'gan', 'zho', 'cmn\\_Yiii', 'yue\\_Hira', 'cmn\\_Latn', 'yue\\_Kana', 'cjy\\_Hant', 'cmn\\_Hira', 'nan\\_Hani', 'nan'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: spa-zho\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: spa\n* tgt\\_alpha3: zho\n* chrF2\\_score: 0.324\n* bleu: 38.8\n* brevity\\_penalty: 0.878\n* ref\\_len: 22762.0\n* src\\_name: Spanish\n* tgt\\_name: Chinese\n* train\\_date: 2021-01-04 00:00:00\n* src\\_alpha2: es\n* tgt\\_alpha2: zh\n* prefer\\_old: False\n* short\\_pair: es-zh\n* helsinki\\_git\\_sha: dfdcef114ffb8a8dbb7a3fcf84bde5af50309500\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2021-01-04-18:53"
] | [
52,
202,
719
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #es #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### es-zh\n\n\n* source group: Spanish\n* target group: Chinese\n* OPUS readme: spa-zho\n* model: transformer\n* source language(s): spa\n* target language(s): cjy\\_Hans cjy\\_Hant cmn cmn\\_Hans cmn\\_Hant hsn hsn\\_Hani lzh nan wuu yue\\_Hans yue\\_Hant\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.8, chr-F: 0.324### System Info:\n\n\n* hf\\_name: es-zh\n* source\\_languages: spa\n* target\\_languages: zho\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['es', 'zh']\n* src\\_constituents: ('Spanish', {'spa'})\n* tgt\\_constituents: ('Chinese', {'wuu\\_Bopo', 'wuu', 'cmn\\_Hang', 'lzh\\_Kana', 'lzh', 'wuu\\_Hani', 'lzh\\_Yiii', 'yue\\_Hans', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn\\_Hans', 'cmn\\_Kana', 'zho\\_Hans', 'zho\\_Hant', 'yue', 'cmn\\_Bopo', 'yue\\_Hang', 'lzh\\_Hans', 'wuu\\_Latn', 'yue\\_Hant', 'hak\\_Hani', 'lzh\\_Bopo', 'cmn\\_Hant', 'lzh\\_Hani', 'lzh\\_Hang', 'cmn', 'lzh\\_Hira', 'yue\\_Bopo', 'yue\\_Hani', 'gan', 'zho', 'cmn\\_Yiii', 'yue\\_Hira', 'cmn\\_Latn', 'yue\\_Kana', 'cjy\\_Hant', 'cmn\\_Hira', 'nan\\_Hani', 'nan'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: spa-zho\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: spa\n* tgt\\_alpha3: zho\n* chrF2\\_score: 0.324\n* bleu: 38.8\n* brevity\\_penalty: 0.878\n* ref\\_len: 22762.0\n* src\\_name: Spanish\n* tgt\\_name: Chinese\n* train\\_date: 2021-01-04 00:00:00\n* src\\_alpha2: es\n* tgt\\_alpha2: zh\n* prefer\\_old: False\n* short\\_pair: es-zh\n* helsinki\\_git\\_sha: dfdcef114ffb8a8dbb7a3fcf84bde5af50309500\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2021-01-04-18:53"
] |
translation | transformers | ### fi-en
* source group: Finnish
* target group: English
* OPUS readme: [fin-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-eng/README.md)
* model: transformer-align
* source language(s): fin
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opusTCv20210807+bt-2021-08-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.zip)
* test set translations: [opusTCv20210807+bt-2021-08-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.test.txt)
* test set scores: [opusTCv20210807+bt-2021-08-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| newsdev2015-enfi.fin-eng | 27.1 | 0.550 | 1500 | 32104 | 0.988 |
| newstest2015-enfi.fin-eng | 28.5 | 0.560 | 1370 | 27356 | 0.980 |
| newstest2016-enfi.fin-eng | 31.7 | 0.586 | 3000 | 63043 | 1.000 |
| newstest2017-enfi.fin-eng | 34.6 | 0.610 | 3002 | 61936 | 0.988 |
| newstest2018-enfi.fin-eng | 25.4 | 0.530 | 3000 | 62325 | 0.981 |
| newstest2019-fien.fin-eng | 30.6 | 0.577 | 1996 | 36227 | 0.994 |
| newstestB2016-enfi.fin-eng | 25.8 | 0.538 | 3000 | 63043 | 0.987 |
| newstestB2017-enfi.fin-eng | 29.6 | 0.572 | 3002 | 61936 | 0.999 |
| newstestB2017-fien.fin-eng | 29.6 | 0.572 | 3002 | 61936 | 0.999 |
| Tatoeba-test-v2021-08-07.fin-eng | 54.1 | 0.700 | 10000 | 75212 | 0.988 |
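A minimal usage sketch with the `transformers` translation pipeline (illustrative; the input sentence is an assumption):

```python
from transformers import pipeline

# Loads the Finnish-to-English Marian checkpoint behind a translation pipeline.
translator = pipeline("translation", model="Helsinki-NLP/opus-tatoeba-fi-en")
print(translator("Hyvää huomenta, mitä kuuluu?")[0]["translation_text"])
```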
### System Info:
- hf_name: fi-en
- source_languages: fin
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fi', 'en']
- src_constituents: ('Finnish', {'fin'})
- tgt_constituents: ('English', {'eng'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: fin-eng
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-08-25.test.txt
- src_alpha3: fin
- tgt_alpha3: eng
- chrF2_score: 0.7
- bleu: 54.1
- src_name: Finnish
- tgt_name: English
- train_date: 2021-08-25 00:00:00
- src_alpha2: fi
- tgt_alpha2: en
- prefer_old: False
- short_pair: fi-en
- helsinki_git_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002
- transformers_git_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
- port_machine: LM0-400-22516.local
- port_time: 2021-11-04-21:36 | {"language": ["fi", "en"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-fi-en | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"fi",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fi",
"en"
] | TAGS
#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #fi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### fi-en
* source group: Finnish
* target group: English
* OPUS readme: fin-eng
* model: transformer-align
* source language(s): fin
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: opusTCv20210807+URL
* test set translations: opusTCv20210807+URL
* test set scores: opusTCv20210807+URL
Benchmarks
----------
### System Info:
* hf\_name: fi-en
* source\_languages: fin
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['fi', 'en']
* src\_constituents: ('Finnish', {'fin'})
* tgt\_constituents: ('English', {'eng'})
* src\_multilingual: False
* tgt\_multilingual: False
* long\_pair: fin-eng
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: fin
* tgt\_alpha3: eng
* chrF2\_score: 0.7
* bleu: 54.1
* src\_name: Finnish
* tgt\_name: English
* train\_date: 2021-08-25 00:00:00
* src\_alpha2: fi
* tgt\_alpha2: en
* prefer\_old: False
* short\_pair: fi-en
* helsinki\_git\_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002
* transformers\_git\_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b
* port\_machine: URL
* port\_time: 2021-11-04-21:36
| [
"### fi-en\n\n\n* source group: Finnish\n* target group: English\n* OPUS readme: fin-eng\n* model: transformer-align\n* source language(s): fin\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: opusTCv20210807+URL\n* test set translations: opusTCv20210807+URL\n* test set scores: opusTCv20210807+URL\n\n\nBenchmarks\n----------",
"### System Info:\n\n\n* hf\\_name: fi-en\n* source\\_languages: fin\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fi', 'en']\n* src\\_constituents: ('Finnish', {'fin'})\n* tgt\\_constituents: ('English', {'eng'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: fin-eng\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fin\n* tgt\\_alpha3: eng\n* chrF2\\_score: 0.7\n* bleu: 54.1\n* src\\_name: Finnish\n* tgt\\_name: English\n* train\\_date: 2021-08-25 00:00:00\n* src\\_alpha2: fi\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* short\\_pair: fi-en\n* helsinki\\_git\\_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002\n* transformers\\_git\\_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b\n* port\\_machine: URL\n* port\\_time: 2021-11-04-21:36"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #fi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### fi-en\n\n\n* source group: Finnish\n* target group: English\n* OPUS readme: fin-eng\n* model: transformer-align\n* source language(s): fin\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: opusTCv20210807+URL\n* test set translations: opusTCv20210807+URL\n* test set scores: opusTCv20210807+URL\n\n\nBenchmarks\n----------",
"### System Info:\n\n\n* hf\\_name: fi-en\n* source\\_languages: fin\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fi', 'en']\n* src\\_constituents: ('Finnish', {'fin'})\n* tgt\\_constituents: ('English', {'eng'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: fin-eng\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fin\n* tgt\\_alpha3: eng\n* chrF2\\_score: 0.7\n* bleu: 54.1\n* src\\_name: Finnish\n* tgt\\_name: English\n* train\\_date: 2021-08-25 00:00:00\n* src\\_alpha2: fi\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* short\\_pair: fi-en\n* helsinki\\_git\\_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002\n* transformers\\_git\\_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b\n* port\\_machine: URL\n* port\\_time: 2021-11-04-21:36"
] | [
55,
135,
390
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #fi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### fi-en\n\n\n* source group: Finnish\n* target group: English\n* OPUS readme: fin-eng\n* model: transformer-align\n* source language(s): fin\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: opusTCv20210807+URL\n* test set translations: opusTCv20210807+URL\n* test set scores: opusTCv20210807+URL\n\n\nBenchmarks\n----------### System Info:\n\n\n* hf\\_name: fi-en\n* source\\_languages: fin\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fi', 'en']\n* src\\_constituents: ('Finnish', {'fin'})\n* tgt\\_constituents: ('English', {'eng'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: fin-eng\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fin\n* tgt\\_alpha3: eng\n* chrF2\\_score: 0.7\n* bleu: 54.1\n* src\\_name: Finnish\n* tgt\\_name: English\n* train\\_date: 2021-08-25 00:00:00\n* src\\_alpha2: fi\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* short\\_pair: fi-en\n* helsinki\\_git\\_sha: 2ef219d5b67f0afb0c6b732cd07001d84181f002\n* transformers\\_git\\_sha: 12b4d66a80419db30a15e7b9d4208ceb9887c03b\n* port\\_machine: URL\n* port\\_time: 2021-11-04-21:36"
] |
translation | transformers | ### fr-it
* source group: French
* target group: Italian
* OPUS readme: [fra-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ita/README.md)
* model: transformer-align
* source language(s): fra
* target language(s): ita
* raw source language(s): fra
* raw target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opusTCv20210807-2021-11-11.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.zip)
* test set translations: [opusTCv20210807-2021-11-11.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.test.txt)
* test set scores: [opusTCv20210807-2021-11-11.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| Tatoeba-test-v2021-08-07.fra-ita | 54.8 | 0.737 | 10000 | 61517 | 0.953 |
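For a quick sanity check, the checkpoint published for this card (`Helsinki-NLP/opus-tatoeba-fr-it` on the Hugging Face Hub) can be loaded with the `transformers` Marian classes. The snippet below is only a minimal sketch; the example sentence is illustrative and no generation settings from the original training setup are implied.

```python
# Minimal sketch: French-to-Italian translation with the Hub checkpoint named in this card.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-tatoeba-fr-it"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Le chat dort sur le canapé."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```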
### System Info:
- hf_name: fr-it
- source_languages: fra
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'it']
- src_constituents: ('French', {'fra'})
- tgt_constituents: ('Italian', {'ita'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: fra-ita
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ita/opusTCv20210807-2021-11-11.test.txt
- src_alpha3: fra
- tgt_alpha3: ita
- chrF2_score: 0.737
- bleu: 54.8
- src_name: French
- tgt_name: Italian
- train_date: 2021-11-11 00:00:00
- src_alpha2: fr
- tgt_alpha2: it
- prefer_old: False
- short_pair: fr-it
- helsinki_git_sha: 7ab0c987850187e0b10342bfc616cd47c027ba18
- transformers_git_sha: df1f94eb4a18b1a27d27e32040b60a17410d516e
- port_machine: LM0-400-22516.local
- port_time: 2021-11-11-19:40 | {"language": ["fr", "it"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-fr-it | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fr",
"it"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #fr #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### fr-it
* source group: French
* target group: Italian
* OPUS readme: fra-ita
* model: transformer-align
* source language(s): fra
* target language(s): ita
* raw source language(s): fra
* raw target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
### System Info:
* hf\_name: fr-it
* source\_languages: fra
* target\_languages: ita
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['fr', 'it']
* src\_constituents: ('French', {'fra'})
* tgt\_constituents: ('Italian', {'ita'})
* src\_multilingual: False
* tgt\_multilingual: False
* long\_pair: fra-ita
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: fra
* tgt\_alpha3: ita
* chrF2\_score: 0.737
* bleu: 54.8
* src\_name: French
* tgt\_name: Italian
* train\_date: 2021-11-11 00:00:00
* src\_alpha2: fr
* tgt\_alpha2: it
* prefer\_old: False
* short\_pair: fr-it
* helsinki\_git\_sha: 7ab0c987850187e0b10342bfc616cd47c027ba18
* transformers\_git\_sha: df1f94eb4a18b1a27d27e32040b60a17410d516e
* port\_machine: URL
* port\_time: 2021-11-11-19:40
| [
"### fr-it\n\n\n* source group: French\n* target group: Italian\n* OPUS readme: fra-ita\n* model: transformer-align\n* source language(s): fra\n* target language(s): ita\n* raw source language(s): fra\n* raw target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------",
"### System Info:\n\n\n* hf\\_name: fr-it\n* source\\_languages: fra\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'it']\n* src\\_constituents: ('French', {'fra'})\n* tgt\\_constituents: ('Italian', {'ita'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: fra-ita\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: ita\n* chrF2\\_score: 0.737\n* bleu: 54.8\n* src\\_name: French\n* tgt\\_name: Italian\n* train\\_date: 2021-11-11 00:00:00\n* src\\_alpha2: fr\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* short\\_pair: fr-it\n* helsinki\\_git\\_sha: 7ab0c987850187e0b10342bfc616cd47c027ba18\n* transformers\\_git\\_sha: df1f94eb4a18b1a27d27e32040b60a17410d516e\n* port\\_machine: URL\n* port\\_time: 2021-11-11-19:40"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### fr-it\n\n\n* source group: French\n* target group: Italian\n* OPUS readme: fra-ita\n* model: transformer-align\n* source language(s): fra\n* target language(s): ita\n* raw source language(s): fra\n* raw target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------",
"### System Info:\n\n\n* hf\\_name: fr-it\n* source\\_languages: fra\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'it']\n* src\\_constituents: ('French', {'fra'})\n* tgt\\_constituents: ('Italian', {'ita'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: fra-ita\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: ita\n* chrF2\\_score: 0.737\n* bleu: 54.8\n* src\\_name: French\n* tgt\\_name: Italian\n* train\\_date: 2021-11-11 00:00:00\n* src\\_alpha2: fr\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* short\\_pair: fr-it\n* helsinki\\_git\\_sha: 7ab0c987850187e0b10342bfc616cd47c027ba18\n* transformers\\_git\\_sha: df1f94eb4a18b1a27d27e32040b60a17410d516e\n* port\\_machine: URL\n* port\\_time: 2021-11-11-19:40"
] | [
51,
129,
390
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #fr #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### fr-it\n\n\n* source group: French\n* target group: Italian\n* OPUS readme: fra-ita\n* model: transformer-align\n* source language(s): fra\n* target language(s): ita\n* raw source language(s): fra\n* raw target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------### System Info:\n\n\n* hf\\_name: fr-it\n* source\\_languages: fra\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['fr', 'it']\n* src\\_constituents: ('French', {'fra'})\n* tgt\\_constituents: ('Italian', {'ita'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: fra-ita\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: fra\n* tgt\\_alpha3: ita\n* chrF2\\_score: 0.737\n* bleu: 54.8\n* src\\_name: French\n* tgt\\_name: Italian\n* train\\_date: 2021-11-11 00:00:00\n* src\\_alpha2: fr\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* short\\_pair: fr-it\n* helsinki\\_git\\_sha: 7ab0c987850187e0b10342bfc616cd47c027ba18\n* transformers\\_git\\_sha: df1f94eb4a18b1a27d27e32040b60a17410d516e\n* port\\_machine: URL\n* port\\_time: 2021-11-11-19:40"
] |
translation | transformers | ### he-fr
* source group: Hebrew
* target group: French
* OPUS readme: [heb-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-fra/README.md)
* model: transformer
* source language(s): heb
* target language(s): fra
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.fra | 47.3 | 0.644 |
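The BLEU and chr-F values above come from the released eval file. To score your own outputs in a comparable way, sacreBLEU provides both metrics; the sketch below is only a rough illustration, the file names are placeholders, and note that sacreBLEU reports chrF on a 0-100 scale rather than the 0-1 scale shown in this table.

```python
# Rough sketch: corpus-level BLEU and chrF with sacreBLEU.
# "hypotheses.txt" and "references.txt" are placeholder file names, one sentence per line.
import sacrebleu

with open("hypotheses.txt", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]
with open("references.txt", encoding="utf-8") as f:
    refs = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hyps, [refs])
chrf = sacrebleu.corpus_chrf(hyps, [refs])
print(f"BLEU = {bleu.score:.1f}, chrF = {chrf.score:.3f}")
```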
### System Info:
- hf_name: he-fr
- source_languages: heb
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'fr']
- src_constituents: ('Hebrew', {'heb'})
- tgt_constituents: ('French', {'fra'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: heb-fra
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-fra/opus-2020-12-10.test.txt
- src_alpha3: heb
- tgt_alpha3: fra
- chrF2_score: 0.644
- bleu: 47.3
- brevity_penalty: 0.9740000000000001
- ref_len: 26123.0
- src_name: Hebrew
- tgt_name: French
- train_date: 2020-12-10 00:00:00
- src_alpha2: he
- tgt_alpha2: fr
- prefer_old: False
- short_pair: he-fr
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-16:03 | {"language": ["he", "fr"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-he-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"he",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"he",
"fr"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #he #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### he-fr
* source group: Hebrew
* target group: French
* OPUS readme: heb-fra
* model: transformer
* source language(s): heb
* target language(s): fra
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 47.3, chr-F: 0.644
### System Info:
* hf\_name: he-fr
* source\_languages: heb
* target\_languages: fra
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['he', 'fr']
* src\_constituents: ('Hebrew', {'heb'})
* tgt\_constituents: ('French', {'fra'})
* src\_multilingual: False
* tgt\_multilingual: False
* long\_pair: heb-fra
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: heb
* tgt\_alpha3: fra
* chrF2\_score: 0.644
* bleu: 47.3
* brevity\_penalty: 0.9740000000000001
* ref\_len: 26123.0
* src\_name: Hebrew
* tgt\_name: French
* train\_date: 2020-12-10 00:00:00
* src\_alpha2: he
* tgt\_alpha2: fr
* prefer\_old: False
* short\_pair: he-fr
* helsinki\_git\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
* transformers\_git\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
* port\_machine: URL
* port\_time: 2020-12-11-16:03
| [
"### he-fr\n\n\n* source group: Hebrew\n* target group: French\n* OPUS readme: heb-fra\n* model: transformer\n* source language(s): heb\n* target language(s): fra\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.3, chr-F: 0.644",
"### System Info:\n\n\n* hf\\_name: he-fr\n* source\\_languages: heb\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'fr']\n* src\\_constituents: ('Hebrew', {'heb'})\n* tgt\\_constituents: ('French', {'fra'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: heb-fra\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: fra\n* chrF2\\_score: 0.644\n* bleu: 47.3\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 26123.0\n* src\\_name: Hebrew\n* tgt\\_name: French\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: he\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* short\\_pair: he-fr\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-16:03"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### he-fr\n\n\n* source group: Hebrew\n* target group: French\n* OPUS readme: heb-fra\n* model: transformer\n* source language(s): heb\n* target language(s): fra\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.3, chr-F: 0.644",
"### System Info:\n\n\n* hf\\_name: he-fr\n* source\\_languages: heb\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'fr']\n* src\\_constituents: ('Hebrew', {'heb'})\n* tgt\\_constituents: ('French', {'fra'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: heb-fra\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: fra\n* chrF2\\_score: 0.644\n* bleu: 47.3\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 26123.0\n* src\\_name: Hebrew\n* tgt\\_name: French\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: he\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* short\\_pair: he-fr\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-16:03"
] | [
51,
129,
420
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #he #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### he-fr\n\n\n* source group: Hebrew\n* target group: French\n* OPUS readme: heb-fra\n* model: transformer\n* source language(s): heb\n* target language(s): fra\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.3, chr-F: 0.644### System Info:\n\n\n* hf\\_name: he-fr\n* source\\_languages: heb\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'fr']\n* src\\_constituents: ('Hebrew', {'heb'})\n* tgt\\_constituents: ('French', {'fra'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: heb-fra\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: fra\n* chrF2\\_score: 0.644\n* bleu: 47.3\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 26123.0\n* src\\_name: Hebrew\n* tgt\\_name: French\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: he\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* short\\_pair: he-fr\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-16:03"
] |
translation | transformers | ### he-it
* source group: Hebrew
* target group: Italian
* OPUS readme: [heb-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ita/README.md)
* model: transformer
* source language(s): heb
* target language(s): ita
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.heb.ita | 41.1 | 0.643 |
### System Info:
- hf_name: he-it
- source_languages: heb
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['he', 'it']
- src_constituents: ('Hebrew', {'heb'})
- tgt_constituents: ('Italian', {'ita'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: heb-ita
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-ita/opus-2020-12-10.test.txt
- src_alpha3: heb
- tgt_alpha3: ita
- chrF2_score: 0.643
- bleu: 41.1
- brevity_penalty: 0.997
- ref_len: 11464.0
- src_name: Hebrew
- tgt_name: Italian
- train_date: 2020-12-10 00:00:00
- src_alpha2: he
- tgt_alpha2: it
- prefer_old: False
- short_pair: he-it
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-16:01 | {"language": ["he", "it"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-he-it | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"he",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"he",
"it"
] | TAGS
#transformers #pytorch #marian #text2text-generation #translation #he #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### he-it
* source group: Hebrew
* target group: Italian
* OPUS readme: heb-ita
* model: transformer
* source language(s): heb
* target language(s): ita
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 41.1, chr-F: 0.643
### System Info:
* hf\_name: he-it
* source\_languages: heb
* target\_languages: ita
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['he', 'it']
* src\_constituents: ('Hebrew', {'heb'})
* tgt\_constituents: ('Italian', {'ita'})
* src\_multilingual: False
* tgt\_multilingual: False
* long\_pair: heb-ita
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: heb
* tgt\_alpha3: ita
* chrF2\_score: 0.643
* bleu: 41.1
* brevity\_penalty: 0.997
* ref\_len: 11464.0
* src\_name: Hebrew
* tgt\_name: Italian
* train\_date: 2020-12-10 00:00:00
* src\_alpha2: he
* tgt\_alpha2: it
* prefer\_old: False
* short\_pair: he-it
* helsinki\_git\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
* transformers\_git\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
* port\_machine: URL
* port\_time: 2020-12-11-16:01
| [
"### he-it\n\n\n* source group: Hebrew\n* target group: Italian\n* OPUS readme: heb-ita\n* model: transformer\n* source language(s): heb\n* target language(s): ita\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.1, chr-F: 0.643",
"### System Info:\n\n\n* hf\\_name: he-it\n* source\\_languages: heb\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'it']\n* src\\_constituents: ('Hebrew', {'heb'})\n* tgt\\_constituents: ('Italian', {'ita'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: heb-ita\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: ita\n* chrF2\\_score: 0.643\n* bleu: 41.1\n* brevity\\_penalty: 0.997\n* ref\\_len: 11464.0\n* src\\_name: Hebrew\n* tgt\\_name: Italian\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: he\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* short\\_pair: he-it\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-16:01"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #he #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### he-it\n\n\n* source group: Hebrew\n* target group: Italian\n* OPUS readme: heb-ita\n* model: transformer\n* source language(s): heb\n* target language(s): ita\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.1, chr-F: 0.643",
"### System Info:\n\n\n* hf\\_name: he-it\n* source\\_languages: heb\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'it']\n* src\\_constituents: ('Hebrew', {'heb'})\n* tgt\\_constituents: ('Italian', {'ita'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: heb-ita\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: ita\n* chrF2\\_score: 0.643\n* bleu: 41.1\n* brevity\\_penalty: 0.997\n* ref\\_len: 11464.0\n* src\\_name: Hebrew\n* tgt\\_name: Italian\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: he\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* short\\_pair: he-it\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-16:01"
] | [
48,
131,
418
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #he #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### he-it\n\n\n* source group: Hebrew\n* target group: Italian\n* OPUS readme: heb-ita\n* model: transformer\n* source language(s): heb\n* target language(s): ita\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.1, chr-F: 0.643### System Info:\n\n\n* hf\\_name: he-it\n* source\\_languages: heb\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['he', 'it']\n* src\\_constituents: ('Hebrew', {'heb'})\n* tgt\\_constituents: ('Italian', {'ita'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: heb-ita\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: heb\n* tgt\\_alpha3: ita\n* chrF2\\_score: 0.643\n* bleu: 41.1\n* brevity\\_penalty: 0.997\n* ref\\_len: 11464.0\n* src\\_name: Hebrew\n* tgt\\_name: Italian\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: he\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* short\\_pair: he-it\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-16:01"
] |
translation | transformers | ### it-he
* source group: Italian
* target group: Hebrew
* OPUS readme: [ita-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-heb/README.md)
* model: transformer
* source language(s): ita
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ita.heb | 38.5 | 0.593 |
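For a quick try-out, the high-level `translation` pipeline can wrap the checkpoint named in this card; this is only a sketch, and the example sentence is illustrative.

```python
# Minimal sketch: Italian-to-Hebrew translation through the pipeline API.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-tatoeba-it-he")
print(translator("Il gatto dorme sul divano.", max_length=128))
```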
### System Info:
- hf_name: it-he
- source_languages: ita
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'he']
- src_constituents: ('Italian', {'ita'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: ita-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.test.txt
- src_alpha3: ita
- tgt_alpha3: heb
- chrF2_score: 0.593
- bleu: 38.5
- brevity_penalty: 0.985
- ref_len: 9796.0
- src_name: Italian
- tgt_name: Hebrew
- train_date: 2020-12-10 00:00:00
- src_alpha2: it
- tgt_alpha2: he
- prefer_old: False
- short_pair: it-he
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-16:02 | {"language": ["it", "he"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-tatoeba-it-he | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"it",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"it",
"he"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #it #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### it-he
* source group: Italian
* target group: Hebrew
* OPUS readme: ita-heb
* model: transformer
* source language(s): ita
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.5, chr-F: 0.593
### System Info:
* hf\_name: it-he
* source\_languages: ita
* target\_languages: heb
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['it', 'he']
* src\_constituents: ('Italian', {'ita'})
* tgt\_constituents: ('Hebrew', {'heb'})
* src\_multilingual: False
* tgt\_multilingual: False
* long\_pair: ita-heb
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: ita
* tgt\_alpha3: heb
* chrF2\_score: 0.593
* bleu: 38.5
* brevity\_penalty: 0.985
* ref\_len: 9796.0
* src\_name: Italian
* tgt\_name: Hebrew
* train\_date: 2020-12-10 00:00:00
* src\_alpha2: it
* tgt\_alpha2: he
* prefer\_old: False
* short\_pair: it-he
* helsinki\_git\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
* transformers\_git\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
* port\_machine: URL
* port\_time: 2020-12-11-16:02
| [
"### it-he\n\n\n* source group: Italian\n* target group: Hebrew\n* OPUS readme: ita-heb\n* model: transformer\n* source language(s): ita\n* target language(s): heb\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.5, chr-F: 0.593",
"### System Info:\n\n\n* hf\\_name: it-he\n* source\\_languages: ita\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'he']\n* src\\_constituents: ('Italian', {'ita'})\n* tgt\\_constituents: ('Hebrew', {'heb'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: ita-heb\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: heb\n* chrF2\\_score: 0.593\n* bleu: 38.5\n* brevity\\_penalty: 0.985\n* ref\\_len: 9796.0\n* src\\_name: Italian\n* tgt\\_name: Hebrew\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: it\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* short\\_pair: it-he\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-16:02"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### it-he\n\n\n* source group: Italian\n* target group: Hebrew\n* OPUS readme: ita-heb\n* model: transformer\n* source language(s): ita\n* target language(s): heb\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.5, chr-F: 0.593",
"### System Info:\n\n\n* hf\\_name: it-he\n* source\\_languages: ita\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'he']\n* src\\_constituents: ('Italian', {'ita'})\n* tgt\\_constituents: ('Hebrew', {'heb'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: ita-heb\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: heb\n* chrF2\\_score: 0.593\n* bleu: 38.5\n* brevity\\_penalty: 0.985\n* ref\\_len: 9796.0\n* src\\_name: Italian\n* tgt\\_name: Hebrew\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: it\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* short\\_pair: it-he\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-16:02"
] | [
51,
131,
419
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### it-he\n\n\n* source group: Italian\n* target group: Hebrew\n* OPUS readme: ita-heb\n* model: transformer\n* source language(s): ita\n* target language(s): heb\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.5, chr-F: 0.593### System Info:\n\n\n* hf\\_name: it-he\n* source\\_languages: ita\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'he']\n* src\\_constituents: ('Italian', {'ita'})\n* tgt\\_constituents: ('Hebrew', {'heb'})\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* long\\_pair: ita-heb\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: heb\n* chrF2\\_score: 0.593\n* bleu: 38.5\n* brevity\\_penalty: 0.985\n* ref\\_len: 9796.0\n* src\\_name: Italian\n* tgt\\_name: Hebrew\n* train\\_date: 2020-12-10 00:00:00\n* src\\_alpha2: it\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* short\\_pair: it-he\n* helsinki\\_git\\_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96\n* transformers\\_git\\_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de\n* port\\_machine: URL\n* port\\_time: 2020-12-11-16:02"
] |
fill-mask | transformers | Thanks for checking this out! <br />
This video explains the ideas behind KerasBERT (still very much a work in progress)
https://www.youtube.com/watch?v=J3P8WLAELqk | {} | HenryAI/KerasBERTv1 | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #has_space #region-us
| Thanks for checking this out! <br />
This video explains the ideas behind KerasBERT (still very much a work in progress)
URL | [] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] | [
32
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-scitldr
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0232
- Rouge1: 35.2134
- Rouge2: 16.8919
- Rougel: 30.8442
- Rougelsum: 30.9316
- Gen Len: 18.7981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
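As a rough reconstruction (not the original training script), the hyperparameters above map onto a `Seq2SeqTrainingArguments` configuration along these lines; the output directory and evaluation strategy are assumptions.

```python
# Hedged sketch of training arguments matching the list above.
# output_dir and evaluation_strategy are assumptions, not taken from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-scitldr",
    learning_rate=3e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    fp16=True,  # "Native AMP" mixed precision
    predict_with_generate=True,
    evaluation_strategy="epoch",
)
```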
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.0533 | 1.0 | 996 | 2.0285 | 34.9774 | 16.6163 | 30.6177 | 30.7038 | 18.7981 |
| 2.0994 | 2.0 | 1992 | 2.0232 | 35.2134 | 16.8919 | 30.8442 | 30.9316 | 18.7981 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "t5-base-finetuned-scitldr", "results": []}]} | HenryHXR/t5-base-finetuned-scitldr | null | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| t5-base-finetuned-scitldr
=========================
This model is a fine-tuned version of t5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.0232
* Rouge1: 35.2134
* Rouge2: 16.8919
* Rougel: 30.8442
* Rougelsum: 30.9316
* Gen Len: 18.7981
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-06
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
54,
112,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-classification | transformers |
This model predicts the genre given a synopsis of about 200 Chinese characters.
The model is trained on TV and Movie datasets and takes simplified Chinese as input.
We trained the model from the "hfl/chinese-bert-wwm-ext" checkpoint.
#### Sample Usage
    from transformers import BertTokenizer, BertForSequenceClassification
    import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
checkpoint = "Herais/pred_genre"
tokenizer = BertTokenizer.from_pretrained(checkpoint,
problem_type="single_label_classification")
model = BertForSequenceClassification.from_pretrained(checkpoint).to(device)
label2id_genre = {'涉案': 7, '都市': 10, '革命': 12, '农村': 4, '传奇': 0,
'其它': 2, '传记': 1, '青少': 11, '军旅': 3, '武打': 6,
'科幻': 9, '神话': 8, '宫廷': 5}
id2label_genre = {7: '涉案', 10: '都市', 12: '革命', 4: '农村', 0: '传奇',
2: '其它', 1: '传记', 11: '青少', 3: '军旅', 6: '武打',
9: '科幻', 8: '神话', 5: '宫廷'}
synopsis = """加油吧!检察官。鲤州市安平区检察院检察官助理蔡晓与徐美津是两个刚入职场的“菜鸟”。\
他们在老检察官冯昆的指导与鼓励下,凭借着自己的一腔热血与对检察事业的执著追求,克服工作上的种种困难,\
成功办理电竞赌博、虚假诉讼、水产市场涉黑等一系列复杂案件,惩治了犯罪分子,维护了人民群众的合法权益,\
为社会主义法治建设贡献了自己的一份力量。在这个过程中,蔡晓与徐美津不仅得到了业务能力上的提升,\
也领悟了人生的真谛,学会真诚地面对家人与朋友,收获了亲情与友谊,成长为合格的员额检察官,\
继续为检察事业贡献自己的青春。 """
    inputs = tokenizer(synopsis, truncation=True, max_length=512, return_tensors='pt').to(device)
    model.eval()
    outputs = model(**inputs)

    label_ids_pred = torch.argmax(outputs.logits, dim=1).to('cpu').numpy()
    labels_pred = [id2label_genre[label] for label in label_ids_pred]

    print(labels_pred)
# ['涉案']
Citation
TBA | {"language": ["zh"], "license": "apache-2.0", "tags": ["classification"], "datasets": ["Custom"], "metrics": ["rouge"]} | Herais/pred_genre | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"classification",
"zh",
"dataset:Custom",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh"
] | TAGS
#transformers #pytorch #bert #text-classification #classification #zh #dataset-Custom #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
This model predicts the genre given a synopsis of about 200 Chinese characters.
The model is trained on TV and Movie datasets and takes simplified Chinese as input.
We trained the model from the "hfl/chinese-bert-wwm-ext" checkpoint.
#### Sample Usage
from transformers import BertTokenizer, BertForSequenceClassification
device = URL("cuda" if URL.is_available() else "cpu")
checkpoint = "Herais/pred_genre"
tokenizer = BertTokenizer.from_pretrained(checkpoint,
problem_type="single_label_classification")
model = BertForSequenceClassification.from_pretrained(checkpoint).to(device)
label2id_genre = {'涉案': 7, '都市': 10, '革命': 12, '农村': 4, '传奇': 0,
'其它': 2, '传记': 1, '青少': 11, '军旅': 3, '武打': 6,
'科幻': 9, '神话': 8, '宫廷': 5}
id2label_genre = {7: '涉案', 10: '都市', 12: '革命', 4: '农村', 0: '传奇',
2: '其它', 1: '传记', 11: '青少', 3: '军旅', 6: '武打',
9: '科幻', 8: '神话', 5: '宫廷'}
synopsis = """加油吧!检察官。鲤州市安平区检察院检察官助理蔡晓与徐美津是两个刚入职场的“菜鸟”。\
他们在老检察官冯昆的指导与鼓励下,凭借着自己的一腔热血与对检察事业的执著追求,克服工作上的种种困难,\
成功办理电竞赌博、虚假诉讼、水产市场涉黑等一系列复杂案件,惩治了犯罪分子,维护了人民群众的合法权益,\
为社会主义法治建设贡献了自己的一份力量。在这个过程中,蔡晓与徐美津不仅得到了业务能力上的提升,\
也领悟了人生的真谛,学会真诚地面对家人与朋友,收获了亲情与友谊,成长为合格的员额检察官,\
继续为检察事业贡献自己的青春。 """
inputs = tokenizer(synopsis, truncation=True, max_length=512, return_tensors='pt')
URL()
outputs = model(input)
label_ids_pred = URL(URL, dim=1).to('cpu').numpy()
labels_pred = [id2label_timeperiod[label] for label in labels_pred]
print(labels_pred)
# ['涉案']
Citation
TBA | [
"#### Sample Usage\n from transformers import BertTokenizer, BertForSequenceClassification\n \n device = URL(\"cuda\" if URL.is_available() else \"cpu\")\n checkpoint = \"Herais/pred_genre\"\n tokenizer = BertTokenizer.from_pretrained(checkpoint, \n problem_type=\"single_label_classification\")\n model = BertForSequenceClassification.from_pretrained(checkpoint).to(device)\n \n label2id_genre = {'涉案': 7, '都市': 10, '革命': 12, '农村': 4, '传奇': 0, \n '其它': 2, '传记': 1, '青少': 11, '军旅': 3, '武打': 6, \n '科幻': 9, '神话': 8, '宫廷': 5}\n\n id2label_genre = {7: '涉案', 10: '都市', 12: '革命', 4: '农村', 0: '传奇', \n 2: '其它', 1: '传记', 11: '青少', 3: '军旅', 6: '武打', \n 9: '科幻', 8: '神话', 5: '宫廷'}\n\n synopsis = \"\"\"加油吧!检察官。鲤州市安平区检察院检察官助理蔡晓与徐美津是两个刚入职场的“菜鸟”。\\\n 他们在老检察官冯昆的指导与鼓励下,凭借着自己的一腔热血与对检察事业的执著追求,克服工作上的种种困难,\\\n 成功办理电竞赌博、虚假诉讼、水产市场涉黑等一系列复杂案件,惩治了犯罪分子,维护了人民群众的合法权益,\\\n 为社会主义法治建设贡献了自己的一份力量。在这个过程中,蔡晓与徐美津不仅得到了业务能力上的提升,\\\n 也领悟了人生的真谛,学会真诚地面对家人与朋友,收获了亲情与友谊,成长为合格的员额检察官,\\\n 继续为检察事业贡献自己的青春。 \"\"\"\n \n inputs = tokenizer(synopsis, truncation=True, max_length=512, return_tensors='pt')\n URL()\n outputs = model(input)\n \n label_ids_pred = URL(URL, dim=1).to('cpu').numpy()\n labels_pred = [id2label_timeperiod[label] for label in labels_pred]\n \n print(labels_pred)\n # ['涉案']\n \n Citation\n TBA"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #classification #zh #dataset-Custom #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### Sample Usage\n from transformers import BertTokenizer, BertForSequenceClassification\n \n device = URL(\"cuda\" if URL.is_available() else \"cpu\")\n checkpoint = \"Herais/pred_genre\"\n tokenizer = BertTokenizer.from_pretrained(checkpoint, \n problem_type=\"single_label_classification\")\n model = BertForSequenceClassification.from_pretrained(checkpoint).to(device)\n \n label2id_genre = {'涉案': 7, '都市': 10, '革命': 12, '农村': 4, '传奇': 0, \n '其它': 2, '传记': 1, '青少': 11, '军旅': 3, '武打': 6, \n '科幻': 9, '神话': 8, '宫廷': 5}\n\n id2label_genre = {7: '涉案', 10: '都市', 12: '革命', 4: '农村', 0: '传奇', \n 2: '其它', 1: '传记', 11: '青少', 3: '军旅', 6: '武打', \n 9: '科幻', 8: '神话', 5: '宫廷'}\n\n synopsis = \"\"\"加油吧!检察官。鲤州市安平区检察院检察官助理蔡晓与徐美津是两个刚入职场的“菜鸟”。\\\n 他们在老检察官冯昆的指导与鼓励下,凭借着自己的一腔热血与对检察事业的执著追求,克服工作上的种种困难,\\\n 成功办理电竞赌博、虚假诉讼、水产市场涉黑等一系列复杂案件,惩治了犯罪分子,维护了人民群众的合法权益,\\\n 为社会主义法治建设贡献了自己的一份力量。在这个过程中,蔡晓与徐美津不仅得到了业务能力上的提升,\\\n 也领悟了人生的真谛,学会真诚地面对家人与朋友,收获了亲情与友谊,成长为合格的员额检察官,\\\n 继续为检察事业贡献自己的青春。 \"\"\"\n \n inputs = tokenizer(synopsis, truncation=True, max_length=512, return_tensors='pt')\n URL()\n outputs = model(input)\n \n label_ids_pred = URL(URL, dim=1).to('cpu').numpy()\n labels_pred = [id2label_timeperiod[label] for label in labels_pred]\n \n print(labels_pred)\n # ['涉案']\n \n Citation\n TBA"
] | [
46,
677
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #classification #zh #dataset-Custom #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n#### Sample Usage\n from transformers import BertTokenizer, BertForSequenceClassification\n \n device = URL(\"cuda\" if URL.is_available() else \"cpu\")\n checkpoint = \"Herais/pred_genre\"\n tokenizer = BertTokenizer.from_pretrained(checkpoint, \n problem_type=\"single_label_classification\")\n model = BertForSequenceClassification.from_pretrained(checkpoint).to(device)\n \n label2id_genre = {'涉案': 7, '都市': 10, '革命': 12, '农村': 4, '传奇': 0, \n '其它': 2, '传记': 1, '青少': 11, '军旅': 3, '武打': 6, \n '科幻': 9, '神话': 8, '宫廷': 5}\n\n id2label_genre = {7: '涉案', 10: '都市', 12: '革命', 4: '农村', 0: '传奇', \n 2: '其它', 1: '传记', 11: '青少', 3: '军旅', 6: '武打', \n 9: '科幻', 8: '神话', 5: '宫廷'}\n\n synopsis = \"\"\"加油吧!检察官。鲤州市安平区检察院检察官助理蔡晓与徐美津是两个刚入职场的“菜鸟”。\\\n 他们在老检察官冯昆的指导与鼓励下,凭借着自己的一腔热血与对检察事业的执著追求,克服工作上的种种困难,\\\n 成功办理电竞赌博、虚假诉讼、水产市场涉黑等一系列复杂案件,惩治了犯罪分子,维护了人民群众的合法权益,\\\n 为社会主义法治建设贡献了自己的一份力量。在这个过程中,蔡晓与徐美津不仅得到了业务能力上的提升,\\\n 也领悟了人生的真谛,学会真诚地面对家人与朋友,收获了亲情与友谊,成长为合格的员额检察官,\\\n 继续为检察事业贡献自己的青春。 \"\"\"\n \n inputs = tokenizer(synopsis, truncation=True, max_length=512, return_tensors='pt')\n URL()\n outputs = model(input)\n \n label_ids_pred = URL(URL, dim=1).to('cpu').numpy()\n labels_pred = [id2label_timeperiod[label] for label in labels_pred]\n \n print(labels_pred)\n # ['涉案']\n \n Citation\n TBA"
] |
text-classification | transformers | This model predicts the time period given a synopsis of about 200 Chinese characters.
The model is trained on TV and Movie datasets and takes simplified Chinese as input.
We trained the model from the "hfl/chinese-bert-wwm-ext" checkpoint.
#### Sample Usage
    from transformers import BertTokenizer, BertForSequenceClassification
    import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
checkpoint = "Herais/pred_timeperiod"
tokenizer = BertTokenizer.from_pretrained(checkpoint,
problem_type="single_label_classification")
model = BertForSequenceClassification.from_pretrained(checkpoint).to(device)
label2id_timeperiod = {'古代': 0, '当代': 1, '现代': 2, '近代': 3, '重大': 4}
id2label_timeperiod = {0: '古代', 1: '当代', 2: '现代', 3: '近代', 4: '重大'}
synopsis = """加油吧!检察官。鲤州市安平区检察院检察官助理蔡晓与徐美津是两个刚入职场的“菜鸟”。\
他们在老检察官冯昆的指导与鼓励下,凭借着自己的一腔热血与对检察事业的执著追求,克服工作上的种种困难,\
成功办理电竞赌博、虚假诉讼、水产市场涉黑等一系列复杂案件,惩治了犯罪分子,维护了人民群众的合法权益,\
为社会主义法治建设贡献了自己的一份力量。在这个过程中,蔡晓与徐美津不仅得到了业务能力上的提升,\
也领悟了人生的真谛,学会真诚地面对家人与朋友,收获了亲情与友谊,成长为合格的员额检察官,\
继续为检察事业贡献自己的青春。 """
    inputs = tokenizer(synopsis, truncation=True, max_length=512, return_tensors='pt').to(device)
    model.eval()
    outputs = model(**inputs)

    label_ids_pred = torch.argmax(outputs.logits, dim=1).to('cpu').numpy()
    labels_pred = [id2label_timeperiod[label] for label in label_ids_pred]

    print(labels_pred)
# ['当代']
Citation
{} | {"language": ["zh"], "license": "apache-2.0", "tags": ["classification"], "datasets": ["Custom"], "metrics": ["rouge"]} | Herais/pred_timeperiod | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"classification",
"zh",
"dataset:Custom",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh"
] | TAGS
#transformers #pytorch #safetensors #bert #text-classification #classification #zh #dataset-Custom #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| This model predicts the time period given a synopsis of about 200 Chinese characters.
The model is trained on TV and Movie datasets and takes simplified Chinese as input.
We trained the model from the "hfl/chinese-bert-wwm-ext" checkpoint.
#### Sample Usage
from transformers import BertTokenizer, BertForSequenceClassification
device = URL("cuda" if URL.is_available() else "cpu")
checkpoint = "Herais/pred_timeperiod"
tokenizer = BertTokenizer.from_pretrained(checkpoint,
problem_type="single_label_classification")
model = BertForSequenceClassification.from_pretrained(checkpoint).to(device)
label2id_timeperiod = {'古代': 0, '当代': 1, '现代': 2, '近代': 3, '重大': 4}
id2label_timeperiod = {0: '古代', 1: '当代', 2: '现代', 3: '近代', 4: '重大'}
synopsis = """加油吧!检察官。鲤州市安平区检察院检察官助理蔡晓与徐美津是两个刚入职场的“菜鸟”。\
他们在老检察官冯昆的指导与鼓励下,凭借着自己的一腔热血与对检察事业的执著追求,克服工作上的种种困难,\
成功办理电竞赌博、虚假诉讼、水产市场涉黑等一系列复杂案件,惩治了犯罪分子,维护了人民群众的合法权益,\
为社会主义法治建设贡献了自己的一份力量。在这个过程中,蔡晓与徐美津不仅得到了业务能力上的提升,\
也领悟了人生的真谛,学会真诚地面对家人与朋友,收获了亲情与友谊,成长为合格的员额检察官,\
继续为检察事业贡献自己的青春。 """
inputs = tokenizer(synopsis, truncation=True, max_length=512, return_tensors='pt')
URL()
outputs = model(input)
label_ids_pred = URL(URL, dim=1).to('cpu').numpy()
labels_pred = [id2label_timeperiod[label] for label in labels_pred]
print(labels_pred)
# ['当代']
Citation
{} | [
"#### Sample Usage\n from transformers import BertTokenizer, BertForSequenceClassification\n \n device = URL(\"cuda\" if URL.is_available() else \"cpu\")\n checkpoint = \"Herais/pred_timeperiod\"\n tokenizer = BertTokenizer.from_pretrained(checkpoint, \n problem_type=\"single_label_classification\")\n model = BertForSequenceClassification.from_pretrained(checkpoint).to(device)\n \n label2id_timeperiod = {'古代': 0, '当代': 1, '现代': 2, '近代': 3, '重大': 4}\n id2label_timeperiod = {0: '古代', 1: '当代', 2: '现代', 3: '近代', 4: '重大'}\n\n synopsis = \"\"\"加油吧!检察官。鲤州市安平区检察院检察官助理蔡晓与徐美津是两个刚入职场的“菜鸟”。\\\n 他们在老检察官冯昆的指导与鼓励下,凭借着自己的一腔热血与对检察事业的执著追求,克服工作上的种种困难,\\\n 成功办理电竞赌博、虚假诉讼、水产市场涉黑等一系列复杂案件,惩治了犯罪分子,维护了人民群众的合法权益,\\\n 为社会主义法治建设贡献了自己的一份力量。在这个过程中,蔡晓与徐美津不仅得到了业务能力上的提升,\\\n 也领悟了人生的真谛,学会真诚地面对家人与朋友,收获了亲情与友谊,成长为合格的员额检察官,\\\n 继续为检察事业贡献自己的青春。 \"\"\"\n \n inputs = tokenizer(synopsis, truncation=True, max_length=512, return_tensors='pt')\n URL()\n outputs = model(input)\n \n label_ids_pred = URL(URL, dim=1).to('cpu').numpy()\n labels_pred = [id2label_timeperiod[label] for label in labels_pred]\n \n print(labels_pred)\n # ['当代']\n \n Citation\n{}"
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #text-classification #classification #zh #dataset-Custom #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### Sample Usage\n from transformers import BertTokenizer, BertForSequenceClassification\n \n device = URL(\"cuda\" if URL.is_available() else \"cpu\")\n checkpoint = \"Herais/pred_timeperiod\"\n tokenizer = BertTokenizer.from_pretrained(checkpoint, \n problem_type=\"single_label_classification\")\n model = BertForSequenceClassification.from_pretrained(checkpoint).to(device)\n \n label2id_timeperiod = {'古代': 0, '当代': 1, '现代': 2, '近代': 3, '重大': 4}\n id2label_timeperiod = {0: '古代', 1: '当代', 2: '现代', 3: '近代', 4: '重大'}\n\n synopsis = \"\"\"加油吧!检察官。鲤州市安平区检察院检察官助理蔡晓与徐美津是两个刚入职场的“菜鸟”。\\\n 他们在老检察官冯昆的指导与鼓励下,凭借着自己的一腔热血与对检察事业的执著追求,克服工作上的种种困难,\\\n 成功办理电竞赌博、虚假诉讼、水产市场涉黑等一系列复杂案件,惩治了犯罪分子,维护了人民群众的合法权益,\\\n 为社会主义法治建设贡献了自己的一份力量。在这个过程中,蔡晓与徐美津不仅得到了业务能力上的提升,\\\n 也领悟了人生的真谛,学会真诚地面对家人与朋友,收获了亲情与友谊,成长为合格的员额检察官,\\\n 继续为检察事业贡献自己的青春。 \"\"\"\n \n inputs = tokenizer(synopsis, truncation=True, max_length=512, return_tensors='pt')\n URL()\n outputs = model(input)\n \n label_ids_pred = URL(URL, dim=1).to('cpu').numpy()\n labels_pred = [id2label_timeperiod[label] for label in labels_pred]\n \n print(labels_pred)\n # ['当代']\n \n Citation\n{}"
] | [
50,
574
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #text-classification #classification #zh #dataset-Custom #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n#### Sample Usage\n from transformers import BertTokenizer, BertForSequenceClassification\n \n device = URL(\"cuda\" if URL.is_available() else \"cpu\")\n checkpoint = \"Herais/pred_timeperiod\"\n tokenizer = BertTokenizer.from_pretrained(checkpoint, \n problem_type=\"single_label_classification\")\n model = BertForSequenceClassification.from_pretrained(checkpoint).to(device)\n \n label2id_timeperiod = {'古代': 0, '当代': 1, '现代': 2, '近代': 3, '重大': 4}\n id2label_timeperiod = {0: '古代', 1: '当代', 2: '现代', 3: '近代', 4: '重大'}\n\n synopsis = \"\"\"加油吧!检察官。鲤州市安平区检察院检察官助理蔡晓与徐美津是两个刚入职场的“菜鸟”。\\\n 他们在老检察官冯昆的指导与鼓励下,凭借着自己的一腔热血与对检察事业的执著追求,克服工作上的种种困难,\\\n 成功办理电竞赌博、虚假诉讼、水产市场涉黑等一系列复杂案件,惩治了犯罪分子,维护了人民群众的合法权益,\\\n 为社会主义法治建设贡献了自己的一份力量。在这个过程中,蔡晓与徐美津不仅得到了业务能力上的提升,\\\n 也领悟了人生的真谛,学会真诚地面对家人与朋友,收获了亲情与友谊,成长为合格的员额检察官,\\\n 继续为检察事业贡献自己的青春。 \"\"\"\n \n inputs = tokenizer(synopsis, truncation=True, max_length=512, return_tensors='pt')\n URL()\n outputs = model(input)\n \n label_ids_pred = URL(URL, dim=1).to('cpu').numpy()\n labels_pred = [id2label_timeperiod[label] for label in labels_pred]\n \n print(labels_pred)\n # ['当代']\n \n Citation\n{}"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# marian-finetuned-hi-hinglish
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.1869
- Validation Loss: 4.0607
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 279, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
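The optimizer configuration above is what the `transformers` Keras helper `create_optimizer` produces (AdamWeightDecay on a polynomial-decay schedule). A minimal sketch of an equivalent setup, assuming no warmup steps as implied by the config, is shown below.

```python
# Hedged sketch: rebuild the AdamWeightDecay optimizer described above.
# The step count of 279 comes from the decay schedule in the config; warmup is assumed to be 0.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=279,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```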
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.1869 | 4.0607 | 0 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.7.0
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "marian-finetuned-hi-hinglish", "results": []}]} | Hetarth/marian-finetuned-hi-hinglish | null | [
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #tf #marian #text2text-generation #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| marian-finetuned-hi-hinglish
============================
This model is a fine-tuned version of Helsinki-NLP/opus-mt-hi-en on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 4.1869
* Validation Loss: 4.0607
* Epoch: 0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': {'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 5e-05, 'decay\_steps': 279, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\_decay\_rate': 0.01}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.16.2
* TensorFlow 2.7.0
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 5e-05, 'decay\\_steps': 279, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.7.0\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #tf #marian #text2text-generation #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 5e-05, 'decay\\_steps': 279, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.7.0\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
46,
195,
5,
38
] | [
"TAGS\n#transformers #tf #marian #text2text-generation #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 5e-05, 'decay\\_steps': 279, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32### Training results### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.7.0\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text2text-generation | transformers | Create README.md
## ByT5 Base Portuguese Product Reviews
#### Model Description
This is a fine-tuned version of ByT5 Base by Google for sentiment analysis of product reviews in Portuguese.
##### Paper: https://arxiv.org/abs/2105.13626
#### Training data
It was trained on product reviews from Americanas.com. You can find the data here: https://github.com/HeyLucasLeao/finetuning-byt5-model.
#### Training Procedure
It was fine-tuned using the Trainer class available in the Hugging Face library. Accuracy, precision, recall, and F1 score were used for evaluation.
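As an illustration of such a metric function (not the original code; the binary 0/1 label encoding is an assumption), something like the following could be passed to the Trainer:
```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    predictions, labels = eval_pred  # assumed to already be binary class ids
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, predictions, average="binary"
    )
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```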
##### Learning Rate: **1e-4**
##### Epochs: **1**
##### Colab for Finetuning: https://drive.google.com/file/d/17TcaN52moq7i7TE2EbcVbwQEQuAIQU63/view?usp=sharing
##### Colab for Metrics: https://colab.research.google.com/drive/1wbTDfOsE45UL8Q3ZD1_FTUmdVOKCcJFf#scrollTo=S4nuLkAFrlZ6
#### Score:
```python
Training Set:
'accuracy': 0.9019706922688226,
'f1': 0.9305820610687022,
'precision': 0.9596555965559656,
'recall': 0.9032183375781431
Test Set:
'accuracy': 0.9019409684035312,
'f1': 0.9303758732034697,
'precision': 0.9006660401258529,
'recall': 0.9621126145787866
Validation Set:
'accuracy': 0.9044948078526491,
'f1': 0.9321924443009364,
'precision': 0.9024426549173129,
'recall': 0.9639705531617191
```
#### Goals
My true intention was purely educational, thus making this version of the model available as an example for future purposes.
How to use
``` python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import numpy as np
import torch
if torch.cuda.is_available():
device = torch.device('cuda')
else:
device = torch.device('cpu')
print(device)
tokenizer = AutoTokenizer.from_pretrained("HeyLucasLeao/byt5-base-pt-product-reviews")
model = AutoModelForSeq2SeqLM.from_pretrained("HeyLucasLeao/byt5-base-pt-product-reviews")
model.to(device)
def classificar_review(review):
inputs = tokenizer([review], padding='max_length', truncation=True, max_length=512, return_tensors='pt')
input_ids = inputs.input_ids.to(device)
attention_mask = inputs.attention_mask.to(device)
output = model.generate(input_ids, attention_mask=attention_mask)
pred = np.argmax(output.cpu(), axis=1)
dici = {0: 'Review Negativo', 1: 'Review Positivo'}
return dici[pred.item()]
classificar_review("Adorei o produto, chegou rápido e funciona muito bem.")  # example review
``` | {} | HeyLucasLeao/byt5-base-pt-product-reviews | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2105.13626",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2105.13626"
] | [] | TAGS
#transformers #pytorch #t5 #text2text-generation #arxiv-2105.13626 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Create URL
## ByT5 Base Portuguese Product Reviews
#### Model Description
This is a finetuned version from ByT5 Base by Google for Sentimental Analysis from Product Reviews in Portuguese.
##### Paper: URL
#### Training data
It was trained from products reviews from a URL. You can found the data here: URL
#### Training Procedure
It was finetuned using the Trainer Class available on the Hugging Face library. For evaluation it was used accuracy, precision, recall and f1 score.
##### Learning Rate: 1e-4
##### Epochs: 1
##### Colab for Finetuning: URL
##### Colab for Metrics: URL
#### Score:
#### Goals
My true intention was totally educational, thus making available a this version of the model as a example for future proposes.
How to use
| [
"## ByT5 Base Portuguese Product Reviews",
"#### Model Description\nThis is a finetuned version from ByT5 Base by Google for Sentimental Analysis from Product Reviews in Portuguese.",
"##### Paper: URL",
"#### Training data\nIt was trained from products reviews from a URL. You can found the data here: URL",
"#### Training Procedure\nIt was finetuned using the Trainer Class available on the Hugging Face library. For evaluation it was used accuracy, precision, recall and f1 score.",
"##### Learning Rate: 1e-4",
"##### Epochs: 1",
"##### Colab for Finetuning: URL",
"##### Colab for Metrics: URL",
"#### Score:",
"#### Goals\n\nMy true intention was totally educational, thus making available a this version of the model as a example for future proposes.\n\nHow to use"
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #arxiv-2105.13626 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## ByT5 Base Portuguese Product Reviews",
"#### Model Description\nThis is a finetuned version from ByT5 Base by Google for Sentimental Analysis from Product Reviews in Portuguese.",
"##### Paper: URL",
"#### Training data\nIt was trained from products reviews from a URL. You can found the data here: URL",
"#### Training Procedure\nIt was finetuned using the Trainer Class available on the Hugging Face library. For evaluation it was used accuracy, precision, recall and f1 score.",
"##### Learning Rate: 1e-4",
"##### Epochs: 1",
"##### Colab for Finetuning: URL",
"##### Colab for Metrics: URL",
"#### Score:",
"#### Goals\n\nMy true intention was totally educational, thus making available a this version of the model as a example for future proposes.\n\nHow to use"
] | [
47,
9,
29,
9,
26,
36,
12,
9,
14,
13,
6,
31
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #arxiv-2105.13626 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n## ByT5 Base Portuguese Product Reviews#### Model Description\nThis is a finetuned version from ByT5 Base by Google for Sentimental Analysis from Product Reviews in Portuguese.##### Paper: URL#### Training data\nIt was trained from products reviews from a URL. You can found the data here: URL#### Training Procedure\nIt was finetuned using the Trainer Class available on the Hugging Face library. For evaluation it was used accuracy, precision, recall and f1 score.##### Learning Rate: 1e-4##### Epochs: 1##### Colab for Finetuning: URL##### Colab for Metrics: URL#### Score:#### Goals\n\nMy true intention was totally educational, thus making available a this version of the model as a example for future proposes.\n\nHow to use"
] |
text2text-generation | transformers | Create README.md
## ByT5 Small Portuguese Product Reviews
#### Model Description
This is a fine-tuned version of ByT5 Small by Google for sentiment analysis of product reviews in Portuguese.
##### Paper: https://arxiv.org/abs/2105.13626
#### Training data
It was trained on product reviews from Americanas.com. You can find the data here: https://github.com/HeyLucasLeao/finetuning-byt5-model.
#### Training Procedure
It was fine-tuned using the Trainer class available in the Hugging Face library. Accuracy, precision, recall, and F1 score were used for evaluation.
##### Learning Rate: **1e-4**
##### Epochs: **1**
##### Colab for Finetuning: https://colab.research.google.com/drive/1EChTeQkGeXi_52lClBNazHVuSNKEHN2f
##### Colab for Metrics: https://colab.research.google.com/drive/1o4tcsP3lpr1TobtE3Txhp9fllxPWXxlw#scrollTo=PXAoog5vQaTn
#### Score:
```python
Training Set:
'accuracy': 0.8974239585927603,
'f1': 0.927229848590765,
'precision': 0.9580290812115055,
'recall': 0.8983492356469835
Test Set:
'accuracy': 0.8957881282882026,
'f1': 0.9261366030421776,
'precision': 0.9559431131213848,
'recall': 0.8981326359661668
Validation Set:
'accuracy': 0.8925383190163382,
'f1': 0.9239208204149773,
'precision': 0.9525448733710351,
'recall': 0.8969668904839083
```
#### Goals
My true intention was purely educational, thus making this version of the model available as an example for future purposes.
How to use
``` python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import numpy as np
import torch
if torch.cuda.is_available():
device = torch.device('cuda')
else:
device = torch.device('cpu')
print(device)
tokenizer = AutoTokenizer.from_pretrained("HeyLucasLeao/byt5-small-pt-product-reviews")
model = AutoModelForSeq2SeqLM.from_pretrained("HeyLucasLeao/byt5-small-pt-product-reviews")
model.to(device)
def classificar_review(review):
inputs = tokenizer([review], padding='max_length', truncation=True, max_length=512, return_tensors='pt')
input_ids = inputs.input_ids.to(device)
attention_mask = inputs.attention_mask.to(device)
output = model.generate(input_ids, attention_mask=attention_mask)
pred = np.argmax(output.cpu(), axis=1)
dici = {0: 'Review Negativo', 1: 'Review Positivo'}
return dici[pred.item()]
classificar_review("Adorei o produto, chegou rápido e funciona muito bem.")  # example review
``` | {} | HeyLucasLeao/byt5-small-pt-product-reviews | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2105.13626",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2105.13626"
] | [] | TAGS
#transformers #pytorch #t5 #text2text-generation #arxiv-2105.13626 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Create URL
## ByT5 Small Portuguese Product Reviews
#### Model Description
This is a finetuned version from ByT5 Small by Google for Sentimental Analysis from Product Reviews in Portuguese.
##### Paper: URL
#### Training data
It was trained from products reviews from a URL. You can found the data here: URL
#### Training Procedure
It was finetuned using the Trainer Class available on the Hugging Face library. For evaluation it was used accuracy, precision, recall and f1 score.
##### Learning Rate: 1e-4
##### Epochs: 1
##### Colab for Finetuning: URL
##### Colab for Metrics: URL
#### Score:
#### Goals
My true intention was totally educational, thus making available a this version of the model as a example for future proposes.
How to use
| [
"## ByT5 Small Portuguese Product Reviews",
"#### Model Description\nThis is a finetuned version from ByT5 Small by Google for Sentimental Analysis from Product Reviews in Portuguese.",
"##### Paper: URL",
"#### Training data\nIt was trained from products reviews from a URL. You can found the data here: URL",
"#### Training Procedure\nIt was finetuned using the Trainer Class available on the Hugging Face library. For evaluation it was used accuracy, precision, recall and f1 score.",
"##### Learning Rate: 1e-4",
"##### Epochs: 1",
"##### Colab for Finetuning: URL",
"##### Colab for Metrics: URL",
"#### Score:",
"#### Goals\n\nMy true intention was totally educational, thus making available a this version of the model as a example for future proposes.\n\nHow to use"
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #arxiv-2105.13626 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## ByT5 Small Portuguese Product Reviews",
"#### Model Description\nThis is a finetuned version from ByT5 Small by Google for Sentimental Analysis from Product Reviews in Portuguese.",
"##### Paper: URL",
"#### Training data\nIt was trained from products reviews from a URL. You can found the data here: URL",
"#### Training Procedure\nIt was finetuned using the Trainer Class available on the Hugging Face library. For evaluation it was used accuracy, precision, recall and f1 score.",
"##### Learning Rate: 1e-4",
"##### Epochs: 1",
"##### Colab for Finetuning: URL",
"##### Colab for Metrics: URL",
"#### Score:",
"#### Goals\n\nMy true intention was totally educational, thus making available a this version of the model as a example for future proposes.\n\nHow to use"
] | [
47,
9,
29,
9,
26,
36,
12,
9,
14,
13,
6,
31
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #arxiv-2105.13626 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n## ByT5 Small Portuguese Product Reviews#### Model Description\nThis is a finetuned version from ByT5 Small by Google for Sentimental Analysis from Product Reviews in Portuguese.##### Paper: URL#### Training data\nIt was trained from products reviews from a URL. You can found the data here: URL#### Training Procedure\nIt was finetuned using the Trainer Class available on the Hugging Face library. For evaluation it was used accuracy, precision, recall and f1 score.##### Learning Rate: 1e-4##### Epochs: 1##### Colab for Finetuning: URL##### Colab for Metrics: URL#### Score:#### Goals\n\nMy true intention was totally educational, thus making available a this version of the model as a example for future proposes.\n\nHow to use"
] |
text-generation | transformers | Create README.md
## Emo Bot
#### Model Description
This is a fine-tuned version of GPT-Neo-125M for generating music lyrics in the emo genre.
#### Training data
It was trained on 2,381 songs by 15 bands that were important to emo culture in the early 2000s, though not necessarily playing directly in the genre.
#### Training Procedure
It was fine-tuned using the Trainer class available in the Hugging Face library.
##### Learning Rate: **2e-4**
##### Epochs: **40**
##### Colab for Finetuning: https://colab.research.google.com/drive/1jwTYI1AygQf7FV9vCHTWA4Gf5i--sjsD?usp=sharing
##### Colab for Testing: https://colab.research.google.com/drive/1wSP4Wyr1-DTTNQbQps_RCO3ThhH-eeZc?usp=sharing
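Schematically, the fine-tuning setup could be reconstructed as below; this is a hypothetical sketch (placeholder lyrics stand in for the 2,381-song corpus), not the notebook linked above:
```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo has no pad token by default
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

# Placeholder corpus; the real dataset holds 2,381 emo songs
songs = Dataset.from_dict({"text": ["I miss you ...", "Another placeholder lyric ..."]})
lyrics = songs.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt-neo-small-emo-lyrics",
                           learning_rate=2e-4, num_train_epochs=40),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    train_dataset=lyrics,
)
trainer.train()
```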
#### Goals
My true intention was purely educational, thus making this version of the model available as an example for future purposes.
How to use
``` python
from transformers import AutoTokenizer, AutoModelForCausalLM
import re
import torch
if torch.cuda.is_available():
device = torch.device('cuda')
else:
device = torch.device('cpu')
print(device)
tokenizer = AutoTokenizer.from_pretrained("HeyLucasLeao/gpt-neo-small-emo-lyrics")
model = AutoModelForCausalLM.from_pretrained("HeyLucasLeao/gpt-neo-small-emo-lyrics")
model.to('cuda')
generated = tokenizer('I miss you',return_tensors='pt').input_ids.cuda()
#Generating texts
sample_outputs = model.generate(generated,
# Use sampling instead of greedy decoding
do_sample=True,
# Keep only top 3 token with the highest probability
top_k=10,
# Maximum sequence length
max_length=200,
# Keep only the most probable tokens with cumulative probability of 95%
top_p=0.95,
# Changes randomness of generated sequences
temperature=2.,
# Number of sequences to generate
num_return_sequences=3)
# Decoding and printing sequences
for i, sample_output in enumerate(sample_outputs):
texto = tokenizer.decode(sample_output.tolist())
regex_padding = re.sub('<|pad|>', '', texto)
regex_barra = re.sub('[|+]', '', regex_padding)
espaço = re.sub('[ +]', ' ', regex_barra)
resultado = re.sub('[\n](2, )', '\n', espaço)
print(">> Text {}: {}".format(i+1, resultado + '\n'))
""">> Texto 1: I miss you
I miss you more than anything
And if you change your mind
I do it like a change of mind
I always do it like theeah
Everybody wants a surprise
Everybody needs to stay collected
I keep your locked and numbered
Use this instead: Run like the wind
Use this instead: Run like the sun
And come back down: You've been replaced
Don't want to be the same
Tomorrow
I don't even need your name
The message is on the way
make it while you're holding on
It's better than it is
Everything more security than a parade
Im getting security
angs the world like a damned soul
We're hanging on a queue
and the truth is on the way
Are you listening?
We're getting security
Send me your soldiers
We're getting blood on"""
""">> Texto 2: I miss you
And I could forget your name
All the words we'd hear
You miss me
I need you
And I need you
You were all by my side
When we'd talk to no one
And I
Just to talk to you
It's easier than it has to be
Except for you
You missed my know-all
You meant to hug me
And I
Just want to feel you touch me
We'll work up
Something wild, just from the inside
Just get closer to me
I need you
You were all by my side
When we*d talk to you
, you better admit
That I'm too broken to be small
You're part of me
And I need you
But I
Don't know how
But I know I need you
Must"""
""">> Texto 3: I miss you
And I can't lie
Inside my head
All the hours you've been through
If I could change your mind
I would give it all away
And I'd give it all away
Just to give it away
To you
Now I wish that I could change
Just to you
I miss you so much
If I could change
So much
I'm looking down
At the road
The one that's already been
Searching for a better way to go
So much I need to see it clear
topk wish me an ehive
I wish I wish I wish I knew
I can give well
In this lonely night
The lonely night
I miss you
I wish it well
If I could change
So much
I need you"""
``` | {} | HeyLucasLeao/gpt-neo-small-emo-lyrics | null | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us
| Create URL
## Emo Bot
#### Model Description
This is a finetuned version from GPT-Neo-125M for Generating Music Lyrics by Emo Genre.
#### Training data
It was trained with 2381 songs by 15 bands that were important to emo culture in the early 2000s, not necessary directly playing on the genre.
#### Training Procedure
It was finetuned using the Trainer Class available on the Hugging Face library.
##### Learning Rate: 2e-4
##### Epochs: 40
##### Colab for Finetuning: URL
##### Colab for Testing: URL
#### Goals
My true intention was totally educational, thus making available a this version of the model as a example for future proposes.
How to use
| [
"## Emo Bot",
"#### Model Description\nThis is a finetuned version from GPT-Neo-125M for Generating Music Lyrics by Emo Genre.",
"#### Training data\nIt was trained with 2381 songs by 15 bands that were important to emo culture in the early 2000s, not necessary directly playing on the genre.",
"#### Training Procedure\nIt was finetuned using the Trainer Class available on the Hugging Face library.",
"##### Learning Rate: 2e-4",
"##### Epochs: 40",
"##### Colab for Finetuning: URL",
"##### Colab for Testing: URL",
"#### Goals\n\nMy true intention was totally educational, thus making available a this version of the model as a example for future proposes.\n\nHow to use"
] | [
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## Emo Bot",
"#### Model Description\nThis is a finetuned version from GPT-Neo-125M for Generating Music Lyrics by Emo Genre.",
"#### Training data\nIt was trained with 2381 songs by 15 bands that were important to emo culture in the early 2000s, not necessary directly playing on the genre.",
"#### Training Procedure\nIt was finetuned using the Trainer Class available on the Hugging Face library.",
"##### Learning Rate: 2e-4",
"##### Epochs: 40",
"##### Colab for Finetuning: URL",
"##### Colab for Testing: URL",
"#### Goals\n\nMy true intention was totally educational, thus making available a this version of the model as a example for future proposes.\n\nHow to use"
] | [
35,
5,
30,
36,
22,
12,
9,
14,
12,
31
] | [
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us \n## Emo Bot#### Model Description\nThis is a finetuned version from GPT-Neo-125M for Generating Music Lyrics by Emo Genre.#### Training data\nIt was trained with 2381 songs by 15 bands that were important to emo culture in the early 2000s, not necessary directly playing on the genre.#### Training Procedure\nIt was finetuned using the Trainer Class available on the Hugging Face library.##### Learning Rate: 2e-4##### Epochs: 40##### Colab for Finetuning: URL##### Colab for Testing: URL#### Goals\n\nMy true intention was totally educational, thus making available a this version of the model as a example for future proposes.\n\nHow to use"
] |
text-generation | transformers | ## GPT-Neo Small Portuguese
#### Model Description
This is a fine-tuned version of GPT-Neo 125M by EleutherAI for the Portuguese language.
#### Training data
It was trained on 227,382 selected texts from a Portuguese Wikipedia (PTWiki) dump. You can find all the data here: https://archive.org/details/ptwiki-dump-20210520
#### Training Procedure
Every text was passed through a GPT-2 tokenizer, with BOS and EOS tokens added to separate them, using the maximum sequence length that GPT-Neo supports. It was fine-tuned using the defaults of the Trainer class available in the Hugging Face library.
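A minimal sketch of that preprocessing step (a reconstruction, not the original script; the BOS/EOS markers mirror the generation example below, and 2048 is GPT-Neo's context length):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HeyLucasLeao/gpt-neo-small-portuguese")

def preprocess(text):
    # Wrap each Wikipedia text with BOS/EOS markers and truncate to 2048 tokens
    return tokenizer(f"<|startoftext|> {text} <|endoftext|>",
                     truncation=True, max_length=2048)

encoded = preprocess("eu amo o brasil.")
```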
##### Learning Rate: **2e-4**
##### Epochs: **1**
#### Goals
My true intention was totally educational, thus making available a Portuguese version of this model.
How to use
``` python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("HeyLucasLeao/gpt-neo-small-portuguese")
model = AutoModelForCausalLM.from_pretrained("HeyLucasLeao/gpt-neo-small-portuguese")
text = 'eu amo o brasil.'
generated = tokenizer(f'<|startoftext|> {text}',
return_tensors='pt').input_ids.cuda()
#Generating texts
sample_outputs = model.generate(generated,
# Use sampling instead of greedy decoding
do_sample=True,
# Keep only top 3 token with the highest probability
top_k=3,
# Maximum sequence length
max_length=200,
# Keep only the most probable tokens with cumulative probability of 95%
top_p=0.95,
# Changes randomness of generated sequences
temperature=1.9,
# Number of sequences to generate
num_return_sequences=3)
# Decoding and printing sequences
for i, sample_output in enumerate(sample_outputs):
print(">> Generated text {}\\\\
\\\\
{}".format(i+1, tokenizer.decode(sample_output.tolist())))
# >> Generated text
#Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
#>> Generated text 1
#<|startoftext|> eu amo o brasil. O termo foi usado por alguns autores como uma forma de designar a formação do poder político do Brasil. A partir da década de 1960, o termo passou a ser usado para designar a formação política do Brasil. A partir de meados da década de 1970 e até o inicio dos anos 2000, o termo foi aplicado à formação político-administrativo do país, sendo utilizado por alguns autores como uma expressão de "política de direita". História Antecedentes O termo "político-administrário" foi usado pela primeira vez em 1891 por um gru
#>> Generated text 2
#<|startoftext|> eu amo o brasil. É uma das muitas pessoas do mundo, ao contrário da maioria das pessoas, que são chamados de "pessoas do Brasil", que são chamados de "brincos do país" e que têm uma carreira de mais de um século. O termo "brincal de ouro" é usado em referências às pessoas que vivem no Brasil, e que são chamados "brincos do país", que são "cidade" e que vivem na cidade de Nova York e que vive em um país onde a maior parte das pessoas são chamados de "cidades". Hist
#>> Generated text 3
#<|startoftext|> eu amo o brasil. É uma expressão que se refere ao uso de um instrumento musical em particular para se referir à qualidade musical, o que é uma expressão da qualidade da qualidade musical de uma pessoa. A expressão "amor" (em inglês, amo), é a expressão que pode ser usada com o intuito empregado em qualquer situação em que a vontade de uma pessoa de se sentir amado ou amoroso é mais do que um desejo de uma vontade. Em geral, a expressão "amoro" (do inglês, amo) pode também se referir tanto a uma pessoa como um instrumento de cordas ou de uma
``` | {} | HeyLucasLeao/gpt-neo-small-portuguese | null | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us
| ## GPT-Neo Small Portuguese
#### Model Description
This is a finetuned version from GPT-Neo 125M by EletheurAI to Portuguese language.
#### Training data
It was trained from 227,382 selected texts from a PTWiki Dump. You can found all the data from here: URL
#### Training Procedure
Every text was passed through a GPT2-Tokenizer with bos and eos tokens to separate them, with max sequence length that the GPT-Neo could support. It was finetuned using the default metrics of the Trainer Class, available on the Hugging Face library.
##### Learning Rate: 2e-4
##### Epochs: 1
#### Goals
My true intention was totally educational, thus making available a Portuguese version of this model.
How to use
| [
"## GPT-Neo Small Portuguese",
"#### Model Description\nThis is a finetuned version from GPT-Neo 125M by EletheurAI to Portuguese language.",
"#### Training data\nIt was trained from 227,382 selected texts from a PTWiki Dump. You can found all the data from here: URL",
"#### Training Procedure\nEvery text was passed through a GPT2-Tokenizer with bos and eos tokens to separate them, with max sequence length that the GPT-Neo could support. It was finetuned using the default metrics of the Trainer Class, available on the Hugging Face library.",
"##### Learning Rate: 2e-4",
"##### Epochs: 1",
"#### Goals\n\nMy true intention was totally educational, thus making available a Portuguese version of this model.\n\nHow to use"
] | [
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us \n",
"## GPT-Neo Small Portuguese",
"#### Model Description\nThis is a finetuned version from GPT-Neo 125M by EletheurAI to Portuguese language.",
"#### Training data\nIt was trained from 227,382 selected texts from a PTWiki Dump. You can found all the data from here: URL",
"#### Training Procedure\nEvery text was passed through a GPT2-Tokenizer with bos and eos tokens to separate them, with max sequence length that the GPT-Neo could support. It was finetuned using the default metrics of the Trainer Class, available on the Hugging Face library.",
"##### Learning Rate: 2e-4",
"##### Epochs: 1",
"#### Goals\n\nMy true intention was totally educational, thus making available a Portuguese version of this model.\n\nHow to use"
] | [
31,
8,
29,
34,
65,
12,
9,
25
] | [
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #region-us \n## GPT-Neo Small Portuguese#### Model Description\nThis is a finetuned version from GPT-Neo 125M by EletheurAI to Portuguese language.#### Training data\nIt was trained from 227,382 selected texts from a PTWiki Dump. You can found all the data from here: URL#### Training Procedure\nEvery text was passed through a GPT2-Tokenizer with bos and eos tokens to separate them, with max sequence length that the GPT-Neo could support. It was finetuned using the default metrics of the Trainer Class, available on the Hugging Face library.##### Learning Rate: 2e-4##### Epochs: 1#### Goals\n\nMy true intention was totally educational, thus making available a Portuguese version of this model.\n\nHow to use"
] |
null | null | # Convert Fairseq Wav2Vec2 to HF
This repo has two scripts that can show how to convert a fairseq checkpoint to HF Transformers.
It's important to always check in a forward pass that the two checkpoints are the same. The procedure should be as follows:
1. Download original model
2. Create HF version of the model:
```
huggingface-cli repo create <name_of_model> --organization <org_of_model>
git clone https://huggingface.co/<org_of_model>/<name_of_model>
```
3. Convert the model
```
./run_convert.sh <name_of_model> <path/to/orig/checkpoint/> 0
```
The "0" means that checkpoint is **not** a fine-tuned one.
4. Verify that models are equal:
```
./run_forward.py <name_of_model> <path/to/orig/checkpoint/> 0
```
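The forward-pass check in step 4 boils down to something like the following sketch (placeholder names; this is not the actual `run_forward.py`):
```python
import torch
from transformers import Wav2Vec2Model

# Assumed setup: the original fairseq model has dumped its input waveform and
# output features to disk so the converted checkpoint can be compared to them.
dump = torch.load("fairseq_forward_dump.pt")  # {"input": ..., "features": ...}

hf_model = Wav2Vec2Model.from_pretrained("<org_of_model>/<name_of_model>")
hf_model.eval()

with torch.no_grad():
    hf_features = hf_model(dump["input"]).last_hidden_state

print("max abs diff:", (hf_features - dump["features"]).abs().max().item())
assert torch.allclose(hf_features, dump["features"], atol=1e-3), "checkpoints differ"
```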
Check the scripts to better understand how they work or contact https://huggingface.co/patrickvonplaten | {} | HfSpeechUtils/convert_wav2vec2_to_hf | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| # Convert Fairseq Wav2Vec2 to HF
This repo has two scripts that can show how to convert a fairseq checkpoint to HF Transformers.
It's important to always check in a forward pass that the two checkpoints are the same. The procedure should be as follows:
1. Download original model
2. Create HF version of the model:
3. Convert the model
The "0" means that checkpoint is not a fine-tuned one.
4. Verify that models are equal:
Check the scripts to better understand how they work or contact URL | [
"# Convert Fairseq Wav2Vec2 to HF\n\nThis repo has two scripts that can show how to convert a fairseq checkpoint to HF Transformers.\n\nIt's important to always check in a forward pass that the two checkpoints are the same. The procedure should be as follows:\n\n1. Download original model\n2. Create HF version of the model:\n\n3. Convert the model\n\nThe \"0\" means that checkpoint is not a fine-tuned one.\n4. Verify that models are equal:\n\n\nCheck the scripts to better understand how they work or contact URL"
] | [
"TAGS\n#region-us \n",
"# Convert Fairseq Wav2Vec2 to HF\n\nThis repo has two scripts that can show how to convert a fairseq checkpoint to HF Transformers.\n\nIt's important to always check in a forward pass that the two checkpoints are the same. The procedure should be as follows:\n\n1. Download original model\n2. Create HF version of the model:\n\n3. Convert the model\n\nThe \"0\" means that checkpoint is not a fine-tuned one.\n4. Verify that models are equal:\n\n\nCheck the scripts to better understand how they work or contact URL"
] | [
5,
119
] | [
"TAGS\n#region-us \n# Convert Fairseq Wav2Vec2 to HF\n\nThis repo has two scripts that can show how to convert a fairseq checkpoint to HF Transformers.\n\nIt's important to always check in a forward pass that the two checkpoints are the same. The procedure should be as follows:\n\n1. Download original model\n2. Create HF version of the model:\n\n3. Convert the model\n\nThe \"0\" means that checkpoint is not a fine-tuned one.\n4. Verify that models are equal:\n\n\nCheck the scripts to better understand how they work or contact URL"
] |
null | null | # Run any CTC model
```python
./run_ctc_model.py "yourModelId" "yourLanguageCode" "yourPhonemeLang" "NumSamplesToDecode"
```
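The script itself is not reproduced here; as a rough sketch (ignoring the phoneme option, with a placeholder model id and language), decoding a few Common Voice samples with a CTC model looks roughly like this:
```python
import torch
from datasets import Audio, load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id, lang, n_samples = "facebook/wav2vec2-large-xlsr-53-german", "de", 4  # placeholders
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

ds = load_dataset("common_voice", lang, split=f"test[:{n_samples}]")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

for sample in ds:
    inputs = processor(sample["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    print(processor.batch_decode(pred_ids)[0])
```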
| {} | HfSpeechUtils/run_ctc_common_voice.py | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| # Run any CTC model
| [
"# Run any CTC model"
] | [
"TAGS\n#region-us \n",
"# Run any CTC model"
] | [
5,
6
] | [
"TAGS\n#region-us \n# Run any CTC model"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8301
- Matthews Correlation: 0.5481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
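For reference, these settings map roughly onto the following `TrainingArguments` (a reconstruction for illustration; the original training script is not included in this card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```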
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5252 | 1.0 | 535 | 0.5094 | 0.4268 |
| 0.3515 | 2.0 | 1070 | 0.5040 | 0.4948 |
| 0.2403 | 3.0 | 1605 | 0.5869 | 0.5449 |
| 0.1731 | 4.0 | 2140 | 0.7338 | 0.5474 |
| 0.1219 | 5.0 | 2675 | 0.8301 | 0.5481 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model_index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metric": {"name": "Matthews Correlation", "type": "matthews_correlation", "value": 0.5481326292844919}}]}]} | Hinova/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8301
* Matthews Correlation: 0.5481
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.9.0
* Datasets 1.11.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] | [
49,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5### Training results### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2176 | 1.0 | 5533 | 1.1429 |
| 0.9425 | 2.0 | 11066 | 1.1196 |
| 0.7586 | 3.0 | 16599 | 1.1582 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"]} | Hoang/distilbert-base-uncased-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-squad
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1582
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.10.0
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] | [
47,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.10.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
text-generation | transformers | KOD file | {} | HoeioUser/kod | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| KOD file | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
36
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
token-classification | transformers | Testing NER | {} | Holako/NER_CAMELBERT | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
| Testing NER | [] | [
"TAGS\n#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
28
] | [
"TAGS\n#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
token-classification | transformers |
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Holako/NER_model_holako")
model = AutoModelForTokenClassification.from_pretrained("Holako/NER_model_holako")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "اسمي احمد"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
Language|Dataset
-|-
Arabic | [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/)
| {} | Holako/NER_model_holako | null | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #xlm-roberta #token-classification #autotrain_compatible #endpoints_compatible #region-us
| #### How to use
You can use this model with Transformers *pipeline* for NER.
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
=======
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
Training data
-------------
| [
"#### How to use\n\n\nYou can use this model with Transformers *pipeline* for NER.",
"#### Limitations and bias\n\n\nThis model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.\n\n\n=======",
"#### Limitations and bias\n\n\nThis model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.\n\n\nTraining data\n-------------"
] | [
"TAGS\n#transformers #pytorch #xlm-roberta #token-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"#### How to use\n\n\nYou can use this model with Transformers *pipeline* for NER.",
"#### Limitations and bias\n\n\nThis model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.\n\n\n=======",
"#### Limitations and bias\n\n\nThis model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.\n\n\nTraining data\n-------------"
] | [
31,
21,
52,
60
] | [
"TAGS\n#transformers #pytorch #xlm-roberta #token-classification #autotrain_compatible #endpoints_compatible #region-us \n#### How to use\n\n\nYou can use this model with Transformers *pipeline* for NER.#### Limitations and bias\n\n\nThis model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.\n\n\n=======#### Limitations and bias\n\n\nThis model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.\n\n\nTraining data\n-------------"
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | MagnusChase7/DialoGPT-medium-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT Model"
] |
token-classification | transformers |
# AlbertNER
This model is fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from [ARMAN](https://github.com/HaniehP/PersianNER), [PEYMA](http://nsurl.org/2019-2/tasks/task-7-named-entity-recognition-ner-for-farsi/), and [WikiANN](https://elisa-ie.github.io/wikiann/) that covers ten types of entities:
- Date (DAT)
- Event (EVE)
- Facility (FAC)
- Location (LOC)
- Money (MON)
- Organization (ORG)
- Percent (PCT)
- Person (PER)
- Product (PRO)
- Time (TIM)
## Dataset Information
| | Records | B-DAT | B-EVE | B-FAC | B-LOC | B-MON | B-ORG | B-PCT | B-PER | B-PRO | B-TIM | I-DAT | I-EVE | I-FAC | I-LOC | I-MON | I-ORG | I-PCT | I-PER | I-PRO | I-TIM |
|:------|----------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|
| Train | 29133 | 1423 | 1487 | 1400 | 13919 | 417 | 15926 | 355 | 12347 | 1855 | 150 | 1947 | 5018 | 2421 | 4118 | 1059 | 19579 | 573 | 7699 | 1914 | 332 |
| Valid | 5142 | 267 | 253 | 250 | 2362 | 100 | 2651 | 64 | 2173 | 317 | 19 | 373 | 799 | 387 | 717 | 270 | 3260 | 101 | 1382 | 303 | 35 |
| Test | 6049 | 407 | 256 | 248 | 2886 | 98 | 3216 | 94 | 2646 | 318 | 43 | 568 | 888 | 408 | 858 | 263 | 3967 | 141 | 1707 | 296 | 78 |
## Evaluation
The following tables summarize the scores obtained by model overall and per each class.
**Overall**
| Model | accuracy | precision | recall | f1 |
|:----------:|:--------:|:---------:|:--------:|:--------:|
| Albert | 0.993405 | 0.938907 | 0.943966 | 0.941429 |
**Per entities**
| | number | precision | recall | f1 |
|:---: |:------: |:---------: |:--------: |:--------: |
| DAT | 407 | 0.820639 | 0.820639 | 0.820639 |
| EVE | 256 | 0.936803 | 0.984375 | 0.960000 |
| FAC | 248 | 0.925373 | 1.000000 | 0.961240 |
| LOC | 2884 | 0.960818 | 0.960818 | 0.960818 |
| MON | 98 | 0.913978 | 0.867347 | 0.890052 |
| ORG | 3216 | 0.920892 | 0.937500 | 0.929122 |
| PCT | 94 | 0.946809 | 0.946809 | 0.946809 |
| PER | 2644 | 0.960000 | 0.944024 | 0.951945 |
| PRO | 318 | 0.942943 | 0.987421 | 0.964670 |
| TIM | 43 | 0.780488 | 0.744186 | 0.761905 |
## How To Use
You can use this model with the Transformers pipeline for NER.
### Installing requirements
```bash
pip install sentencepiece
pip install transformers
```
### How to predict using pipeline
```python
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification # for pytorch
from transformers import TFAutoModelForTokenClassification # for tensorflow
from transformers import pipeline
model_name_or_path = "HooshvareLab/albert-fa-zwnj-base-v2-ner" # Albert
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch
# model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "در سال ۲۰۱۳ درگذشت و آندرتیکر و کین برای او مراسم یادبود گرفتند."
ner_results = nlp(example)
print(ner_results)
```
## Questions?
Post a Github issue on the [ParsNER Issues](https://github.com/hooshvare/parsner/issues) repo. | {"language": "fa"} | HooshvareLab/albert-fa-zwnj-base-v2-ner | null | [
"transformers",
"pytorch",
"tf",
"albert",
"token-classification",
"fa",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #albert #token-classification #fa #autotrain_compatible #endpoints_compatible #region-us
| AlbertNER
=========
This model fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from ARMAN, PEYMA, and WikiANN that covered ten types of entities:
* Date (DAT)
* Event (EVE)
* Facility (FAC)
* Location (LOC)
* Money (MON)
* Organization (ORG)
* Percent (PCT)
* Person (PER)
* Product (PRO)
* Time (TIM)
Dataset Information
-------------------
Evaluation
----------
The following tables summarize the scores obtained by model overall and per each class.
Overall
Per entities
How To Use
----------
You use this model with Transformers pipeline for NER.
### Installing requirements
### How to predict using pipeline
Questions?
----------
Post a Github issue on the ParsNER Issues repo.
| [
"### Installing requirements",
"### How to predict using pipeline\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsNER Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #albert #token-classification #fa #autotrain_compatible #endpoints_compatible #region-us \n",
"### Installing requirements",
"### How to predict using pipeline\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsNER Issues repo."
] | [
33,
5,
34
] | [
"TAGS\n#transformers #pytorch #tf #albert #token-classification #fa #autotrain_compatible #endpoints_compatible #region-us \n### Installing requirements### How to predict using pipeline\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsNER Issues repo."
] |
fill-mask | transformers |
# ALBERT-Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> میتونی بهش بگی برت_کوچولو
> Call it little_berty
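Although this card does not include a usage snippet, a minimal fill-mask example would look like the following sketch (the Persian sentence is just a placeholder, roughly "the weather is very ___ today"):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="HooshvareLab/albert-fa-zwnj-base-v2")
mask = fill_mask.tokenizer.mask_token
print(fill_mask(f"هوا امروز خیلی {mask} است."))
```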
### BibTeX entry and citation info
Please cite in your publication as the following:
```bibtex
@misc{ALBERTPersian,
author = {Hooshvare Team},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
```
## Questions?
Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo. | {"language": "fa", "license": "apache-2.0"} | HooshvareLab/albert-fa-zwnj-base-v2 | null | [
"transformers",
"pytorch",
"tf",
"albert",
"fill-mask",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #albert #fill-mask #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# ALBERT-Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> میتونی بهش بگی برت_کوچولو
> Call it little_berty
### BibTeX entry and citation info
Please cite in your publication as the following:
## Questions?
Post a Github issue on the ALBERT-Persian repo. | [
"# ALBERT-Persian\n\nA Lite BERT for Self-supervised Learning of Language Representations for the Persian Language\n\n> میتونی بهش بگی برت_کوچولو\n\n> Call it little_berty",
"### BibTeX entry and citation info\n\nPlease cite in your publication as the following:",
"## Questions?\nPost a Github issue on the ALBERT-Persian repo."
] | [
"TAGS\n#transformers #pytorch #tf #albert #fill-mask #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# ALBERT-Persian\n\nA Lite BERT for Self-supervised Learning of Language Representations for the Persian Language\n\n> میتونی بهش بگی برت_کوچولو\n\n> Call it little_berty",
"### BibTeX entry and citation info\n\nPlease cite in your publication as the following:",
"## Questions?\nPost a Github issue on the ALBERT-Persian repo."
] | [
41,
50,
19,
18
] | [
"TAGS\n#transformers #pytorch #tf #albert #fill-mask #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# ALBERT-Persian\n\nA Lite BERT for Self-supervised Learning of Language Representations for the Persian Language\n\n> میتونی بهش بگی برت_کوچولو\n\n> Call it little_berty### BibTeX entry and citation info\n\nPlease cite in your publication as the following:## Questions?\nPost a Github issue on the ALBERT-Persian repo."
] |
token-classification | transformers |
## ParsBERT: Transformer-based Model for Persian Language Understanding
ParsBERT is a monolingual language model based on Google’s BERT architecture with the same configurations as BERT-Base.
Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515)
All the models (downstream tasks) are uncased and trained with whole word masking. (coming soon stay tuned)
## Persian NER [ARMAN, PEYMA, ARMAN+PEYMA]
This task aims to extract named entities in the text, such as names, and label them with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences that are marked in the `IOB` format. In this format, tokens that are not part of an entity are tagged as `”O”`, the `”B”` tag corresponds to the first word of an entity, and the `”I”` tag corresponds to the rest of the terms of the same entity. Both `”B”` and `”I”` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, `ARMAN` and `PEYMA`. In ParsBERT, we prepared NER models for both datasets, as well as a combination of both datasets.
### ARMAN
The ARMAN dataset holds 7,682 sentences with 250,015 tokens tagged over six different classes.
1. Organization
2. Location
3. Facility
4. Event
5. Product
6. Person
| Label | # |
|:------------:|:-----:|
| Organization | 30108 |
| Location | 12924 |
| Facility | 4458 |
| Event | 7557 |
| Product | 4389 |
| Person | 15645 |
**Download**
You can download the dataset from [here](https://github.com/HaniehP/PersianNER)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|---------|----------|------------|--------------|----------|----------------|------------|
| ARMAN | 93.10* | 89.9 | 84.03 | 86.55 | - | 77.45 |
## How to use :hugs:
| Notebook | Description | |
|:----------|:-------------|------:|
| [How to use Pipelines](https://github.com/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) | Simple and efficient way to use State-of-the-Art models on downstream tasks through transformers | [](https://colab.research.google.com/github/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) |
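If you only need quick predictions, a minimal sketch with the Transformers `pipeline` API should be enough; the Persian sentence below is purely illustrative and not taken from the dataset.
```python
from transformers import pipeline
# Token-classification (NER) pipeline backed by the ARMAN fine-tuned checkpoint.
ner = pipeline("ner", model="HooshvareLab/bert-base-parsbert-armanner-uncased")
# Each prediction carries the sub-token, its IOB-style label from the model
# config, a confidence score, and character offsets into the input text.
sentence = "مدیرعامل شرکت دیجیکالا در تهران سخنرانی کرد."
for prediction in ner(sentence):
    print(prediction["word"], prediction["entity"], round(prediction["score"], 3))
```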
## Cite
Please cite the following paper in your publication if you are using [ParsBERT](https://arxiv.org/abs/2005.12515) in your research:
```markdown
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Acknowledgments
We hereby, express our gratitude to the [Tensorflow Research Cloud (TFRC) program](https://tensorflow.org/tfrc) for providing us with the necessary computation resources. We also thank [Hooshvare](https://hooshvare.com) Research Group for facilitating dataset gathering and scraping online text resources.
## Contributors
- Mehrdad Farahani: [Linkedin](https://www.linkedin.com/in/m3hrdadfi/), [Twitter](https://twitter.com/m3hrdadfi), [Github](https://github.com/m3hrdadfi)
- Mohammad Gharachorloo: [Linkedin](https://www.linkedin.com/in/mohammad-gharachorloo/), [Twitter](https://twitter.com/MGharachorloo), [Github](https://github.com/baarsaam)
- Marzieh Farahani: [Linkedin](https://www.linkedin.com/in/marziehphi/), [Twitter](https://twitter.com/marziehphi), [Github](https://github.com/marziehphi)
- Mohammad Manthouri: [Linkedin](https://www.linkedin.com/in/mohammad-manthouri-aka-mansouri-07030766/), [Twitter](https://twitter.com/mmanthouri), [Github](https://github.com/mmanthouri)
- Hooshvare Team: [Official Website](https://hooshvare.com/), [Linkedin](https://www.linkedin.com/company/hooshvare), [Twitter](https://twitter.com/hooshvare), [Github](https://github.com/hooshvare), [Instagram](https://www.instagram.com/hooshvare/)
+ And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: [Linkedin](https://www.linkedin.com/in/sara-tabrizi-64548b79/), [Behance](https://www.behance.net/saratabrizi), [Instagram](https://www.instagram.com/sara_b_tabrizi/)
## Releases
### Release v0.1 (May 29, 2019)
This is the first version of our ParsBERT NER!
| {"language": "fa", "license": "apache-2.0"} | HooshvareLab/bert-base-parsbert-armanner-uncased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"fa",
"arxiv:2005.12515",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2005.12515"
] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #bert #token-classification #fa #arxiv-2005.12515 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ParsBERT: Transformer-based Model for Persian Language Understanding
--------------------------------------------------------------------
ParsBERT is a monolingual language model based on Google’s BERT architecture with the same configurations as BERT-Base.
Paper presenting ParsBERT: arXiv:2005.12515
All the models (downstream tasks) are uncased and trained with whole word masking. (coming soon stay tuned)
Persian NER [ARMAN, PEYMA, ARMAN+PEYMA]
---------------------------------------
This task aims to extract named entities in the text, such as names and label with appropriate 'NER' classes such as locations, organizations, etc. The datasets used for this task contain sentences that are marked with 'IOB' format. In this format, tokens that are not part of an entity are tagged as '”O”' the '”B”'tag corresponds to the first word of an object, and the '”I”' tag corresponds to the rest of the terms of the same entity. Both '”B”' and '”I”' tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, 'ARMAN', and 'PEYMA'. In ParsBERT, we prepared ner for both datasets as well as a combination of both datasets.
### ARMAN
ARMAN dataset holds 7,682 sentences with 250,015 sentences tagged over six different classes.
1. Organization
2. Location
3. Facility
4. Event
5. Product
6. Person
Download
You can download the dataset from here
Results
-------
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
How to use :hugs:
-----------------
Cite
----
Please cite the following paper in your publication if you are using ParsBERT in your research:
Acknowledgments
---------------
We hereby, express our gratitude to the Tensorflow Research Cloud (TFRC) program for providing us with the necessary computation resources. We also thank Hooshvare Research Group for facilitating dataset gathering and scraping online text resources.
Contributors
------------
* Mehrdad Farahani: Linkedin, Twitter, Github
* Mohammad Gharachorloo: Linkedin, Twitter, Github
* Marzieh Farahani: Linkedin, Twitter, Github
* Mohammad Manthouri: Linkedin, Twitter, Github
* Hooshvare Team: Official Website, Linkedin, Twitter, Github, Instagram
* And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: Linkedin, Behance, Instagram
Releases
--------
### Release v0.1 (May 29, 2019)
This is the first version of our ParsBERT NER!
| [
"### ARMAN\n\n\nARMAN dataset holds 7,682 sentences with 250,015 sentences tagged over six different classes.\n\n\n1. Organization\n2. Location\n3. Facility\n4. Event\n5. Product\n6. Person\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------\n\n\n\nCite\n----\n\n\nPlease cite the following paper in your publication if you are using ParsBERT in your research:\n\n\nAcknowledgments\n---------------\n\n\nWe hereby, express our gratitude to the Tensorflow Research Cloud (TFRC) program for providing us with the necessary computation resources. We also thank Hooshvare Research Group for facilitating dataset gathering and scraping online text resources.\n\n\nContributors\n------------\n\n\n* Mehrdad Farahani: Linkedin, Twitter, Github\n* Mohammad Gharachorloo: Linkedin, Twitter, Github\n* Marzieh Farahani: Linkedin, Twitter, Github\n* Mohammad Manthouri: Linkedin, Twitter, Github\n* Hooshvare Team: Official Website, Linkedin, Twitter, Github, Instagram\n\n\n* And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: Linkedin, Behance, Instagram\n\n\nReleases\n--------",
"### Release v0.1 (May 29, 2019)\n\n\nThis is the first version of our ParsBERT NER!"
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #fa #arxiv-2005.12515 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### ARMAN\n\n\nARMAN dataset holds 7,682 sentences with 250,015 sentences tagged over six different classes.\n\n\n1. Organization\n2. Location\n3. Facility\n4. Event\n5. Product\n6. Person\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------\n\n\n\nCite\n----\n\n\nPlease cite the following paper in your publication if you are using ParsBERT in your research:\n\n\nAcknowledgments\n---------------\n\n\nWe hereby, express our gratitude to the Tensorflow Research Cloud (TFRC) program for providing us with the necessary computation resources. We also thank Hooshvare Research Group for facilitating dataset gathering and scraping online text resources.\n\n\nContributors\n------------\n\n\n* Mehrdad Farahani: Linkedin, Twitter, Github\n* Mohammad Gharachorloo: Linkedin, Twitter, Github\n* Marzieh Farahani: Linkedin, Twitter, Github\n* Mohammad Manthouri: Linkedin, Twitter, Github\n* Hooshvare Team: Official Website, Linkedin, Twitter, Github, Instagram\n\n\n* And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: Linkedin, Behance, Instagram\n\n\nReleases\n--------",
"### Release v0.1 (May 29, 2019)\n\n\nThis is the first version of our ParsBERT NER!"
] | [
52,
336,
27
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #fa #arxiv-2005.12515 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### ARMAN\n\n\nARMAN dataset holds 7,682 sentences with 250,015 sentences tagged over six different classes.\n\n\n1. Organization\n2. Location\n3. Facility\n4. Event\n5. Product\n6. Person\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------\n\n\n\nCite\n----\n\n\nPlease cite the following paper in your publication if you are using ParsBERT in your research:\n\n\nAcknowledgments\n---------------\n\n\nWe hereby, express our gratitude to the Tensorflow Research Cloud (TFRC) program for providing us with the necessary computation resources. We also thank Hooshvare Research Group for facilitating dataset gathering and scraping online text resources.\n\n\nContributors\n------------\n\n\n* Mehrdad Farahani: Linkedin, Twitter, Github\n* Mohammad Gharachorloo: Linkedin, Twitter, Github\n* Marzieh Farahani: Linkedin, Twitter, Github\n* Mohammad Manthouri: Linkedin, Twitter, Github\n* Hooshvare Team: Official Website, Linkedin, Twitter, Github, Instagram\n\n\n* And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: Linkedin, Behance, Instagram\n\n\nReleases\n--------### Release v0.1 (May 29, 2019)\n\n\nThis is the first version of our ParsBERT NER!"
] |
token-classification | transformers |
## ParsBERT: Transformer-based Model for Persian Language Understanding
ParsBERT is a monolingual language model based on Google’s BERT architecture with the same configurations as BERT-Base.
Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515)
All the models (downstream tasks) are uncased and trained with whole word masking. (coming soon stay tuned)
## Persian NER [ARMAN, PEYMA, ARMAN+PEYMA]
This task aims to extract named entities in the text, such as names, and label them with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences that are marked in `IOB` format. In this format, tokens that are not part of an entity are tagged as `”O”`, the `”B”` tag corresponds to the first word of an entity, and the `”I”` tag corresponds to the rest of the terms of the same entity. Both `”B”` and `”I”` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, `ARMAN` and `PEYMA`. In ParsBERT, we prepared NER models for both datasets as well as for a combination of both.
### PEYMA
PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes.
1. Organization
2. Money
3. Location
4. Date
5. Time
6. Person
7. Percent
| Label | # |
|:------------:|:-----:|
| Organization | 16964 |
| Money | 2037 |
| Location | 8782 |
| Date | 4259 |
| Time | 732 |
| Person | 7675 |
| Percent | 699 |
**Download**
You can download the dataset from [here](http://nsurl.org/tasks/task-7-named-entity-recognition-ner-for-farsi/)
---
### ARMAN
ARMAN dataset holds 7,682 sentences with 250,015 tokens tagged over six different classes.
1. Organization
2. Location
3. Facility
4. Event
5. Product
6. Person
| Label | # |
|:------------:|:-----:|
| Organization | 30108 |
| Location | 12924 |
| Facility | 4458 |
| Event | 7557 |
| Product | 4389 |
| Person | 15645 |
**Download**
You can download the dataset from [here](https://github.com/HaniehP/PersianNER)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|:---------------:|:--------:|:----------:|:--------------:|:----------:|:----------------:|:------------:|
| ARMAN + PEYMA | 95.13* | - | - | - | - | - |
| PEYMA | 98.79* | - | 90.59 | - | 84.00 | - |
| ARMAN | 93.10* | 89.9 | 84.03 | 86.55 | - | 77.45 |
## How to use :hugs:
| Notebook | Description | |
|:----------|:-------------|------:|
| [How to use Pipelines](https://github.com/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) | Simple and efficient way to use State-of-the-Art models on downstream tasks through transformers | [](https://colab.research.google.com/github/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) |
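For manual inference without the `pipeline` helper, a rough sketch such as the following can be used; the sample sentence is only an illustration.
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification
model_name = "HooshvareLab/bert-base-parsbert-ner-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
# Illustrative Persian sentence; any raw text can be tagged the same way.
sentence = "سازمان ملل متحد در نیویورک مستقر است."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)
# Map the highest-scoring label id of every sub-token back to its tag name.
label_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, label_ids):
    print(token, model.config.id2label[int(label_id)])
```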
## Cite
Please cite the following paper in your publication if you are using [ParsBERT](https://arxiv.org/abs/2005.12515) in your research:
```markdown
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Acknowledgments
We hereby, express our gratitude to the [Tensorflow Research Cloud (TFRC) program](https://tensorflow.org/tfrc) for providing us with the necessary computation resources. We also thank [Hooshvare](https://hooshvare.com) Research Group for facilitating dataset gathering and scraping online text resources.
## Contributors
- Mehrdad Farahani: [Linkedin](https://www.linkedin.com/in/m3hrdadfi/), [Twitter](https://twitter.com/m3hrdadfi), [Github](https://github.com/m3hrdadfi)
- Mohammad Gharachorloo: [Linkedin](https://www.linkedin.com/in/mohammad-gharachorloo/), [Twitter](https://twitter.com/MGharachorloo), [Github](https://github.com/baarsaam)
- Marzieh Farahani: [Linkedin](https://www.linkedin.com/in/marziehphi/), [Twitter](https://twitter.com/marziehphi), [Github](https://github.com/marziehphi)
- Mohammad Manthouri: [Linkedin](https://www.linkedin.com/in/mohammad-manthouri-aka-mansouri-07030766/), [Twitter](https://twitter.com/mmanthouri), [Github](https://github.com/mmanthouri)
- Hooshvare Team: [Official Website](https://hooshvare.com/), [Linkedin](https://www.linkedin.com/company/hooshvare), [Twitter](https://twitter.com/hooshvare), [Github](https://github.com/hooshvare), [Instagram](https://www.instagram.com/hooshvare/)
+ And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: [Linkedin](https://www.linkedin.com/in/sara-tabrizi-64548b79/), [Behance](https://www.behance.net/saratabrizi), [Instagram](https://www.instagram.com/sara_b_tabrizi/)
## Releases
### Release v0.1 (May 29, 2019)
This is the first version of our ParsBERT NER!
| {"language": "fa", "license": "apache-2.0"} | HooshvareLab/bert-base-parsbert-ner-uncased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"fa",
"arxiv:2005.12515",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2005.12515"
] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #bert #token-classification #fa #arxiv-2005.12515 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ParsBERT: Transformer-based Model for Persian Language Understanding
--------------------------------------------------------------------
ParsBERT is a monolingual language model based on Google’s BERT architecture with the same configurations as BERT-Base.
Paper presenting ParsBERT: arXiv:2005.12515
All the models (downstream tasks) are uncased and trained with whole word masking. (coming soon stay tuned)
Persian NER [ARMAN, PEYMA, ARMAN+PEYMA]
---------------------------------------
This task aims to extract named entities in the text, such as names and label with appropriate 'NER' classes such as locations, organizations, etc. The datasets used for this task contain sentences that are marked with 'IOB' format. In this format, tokens that are not part of an entity are tagged as '”O”' the '”B”'tag corresponds to the first word of an object, and the '”I”' tag corresponds to the rest of the terms of the same entity. Both '”B”' and '”I”' tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, 'ARMAN', and 'PEYMA'. In ParsBERT, we prepared ner for both datasets as well as a combination of both datasets.
### PEYMA
PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes.
1. Organization
2. Money
3. Location
4. Date
5. Time
6. Person
7. Percent
Download
You can download the dataset from here
---
### ARMAN
ARMAN dataset holds 7,682 sentences with 250,015 sentences tagged over six different classes.
1. Organization
2. Location
3. Facility
4. Event
5. Product
6. Person
Download
You can download the dataset from here
Results
-------
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
How to use :hugs:
-----------------
Cite
----
Please cite the following paper in your publication if you are using ParsBERT in your research:
Acknowledgments
---------------
We hereby, express our gratitude to the Tensorflow Research Cloud (TFRC) program for providing us with the necessary computation resources. We also thank Hooshvare Research Group for facilitating dataset gathering and scraping online text resources.
Contributors
------------
* Mehrdad Farahani: Linkedin, Twitter, Github
* Mohammad Gharachorloo: Linkedin, Twitter, Github
* Marzieh Farahani: Linkedin, Twitter, Github
* Mohammad Manthouri: Linkedin, Twitter, Github
* Hooshvare Team: Official Website, Linkedin, Twitter, Github, Instagram
* And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: Linkedin, Behance, Instagram
Releases
--------
### Release v0.1 (May 29, 2019)
This is the first version of our ParsBERT NER!
| [
"### PEYMA\n\n\nPEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes.\n\n\n1. Organization\n2. Money\n3. Location\n4. Date\n5. Time\n6. Person\n7. Percent\n\n\n\nDownload\nYou can download the dataset from here\n\n\n\n\n---",
"### ARMAN\n\n\nARMAN dataset holds 7,682 sentences with 250,015 sentences tagged over six different classes.\n\n\n1. Organization\n2. Location\n3. Facility\n4. Event\n5. Product\n6. Person\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------\n\n\n\nCite\n----\n\n\nPlease cite the following paper in your publication if you are using ParsBERT in your research:\n\n\nAcknowledgments\n---------------\n\n\nWe hereby, express our gratitude to the Tensorflow Research Cloud (TFRC) program for providing us with the necessary computation resources. We also thank Hooshvare Research Group for facilitating dataset gathering and scraping online text resources.\n\n\nContributors\n------------\n\n\n* Mehrdad Farahani: Linkedin, Twitter, Github\n* Mohammad Gharachorloo: Linkedin, Twitter, Github\n* Marzieh Farahani: Linkedin, Twitter, Github\n* Mohammad Manthouri: Linkedin, Twitter, Github\n* Hooshvare Team: Official Website, Linkedin, Twitter, Github, Instagram\n\n\n* And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: Linkedin, Behance, Instagram\n\n\nReleases\n--------",
"### Release v0.1 (May 29, 2019)\n\n\nThis is the first version of our ParsBERT NER!"
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #fa #arxiv-2005.12515 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### PEYMA\n\n\nPEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes.\n\n\n1. Organization\n2. Money\n3. Location\n4. Date\n5. Time\n6. Person\n7. Percent\n\n\n\nDownload\nYou can download the dataset from here\n\n\n\n\n---",
"### ARMAN\n\n\nARMAN dataset holds 7,682 sentences with 250,015 sentences tagged over six different classes.\n\n\n1. Organization\n2. Location\n3. Facility\n4. Event\n5. Product\n6. Person\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------\n\n\n\nCite\n----\n\n\nPlease cite the following paper in your publication if you are using ParsBERT in your research:\n\n\nAcknowledgments\n---------------\n\n\nWe hereby, express our gratitude to the Tensorflow Research Cloud (TFRC) program for providing us with the necessary computation resources. We also thank Hooshvare Research Group for facilitating dataset gathering and scraping online text resources.\n\n\nContributors\n------------\n\n\n* Mehrdad Farahani: Linkedin, Twitter, Github\n* Mohammad Gharachorloo: Linkedin, Twitter, Github\n* Marzieh Farahani: Linkedin, Twitter, Github\n* Mohammad Manthouri: Linkedin, Twitter, Github\n* Hooshvare Team: Official Website, Linkedin, Twitter, Github, Instagram\n\n\n* And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: Linkedin, Behance, Instagram\n\n\nReleases\n--------",
"### Release v0.1 (May 29, 2019)\n\n\nThis is the first version of our ParsBERT NER!"
] | [
56,
72,
336,
27
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #fa #arxiv-2005.12515 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### PEYMA\n\n\nPEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes.\n\n\n1. Organization\n2. Money\n3. Location\n4. Date\n5. Time\n6. Person\n7. Percent\n\n\n\nDownload\nYou can download the dataset from here\n\n\n\n\n---### ARMAN\n\n\nARMAN dataset holds 7,682 sentences with 250,015 sentences tagged over six different classes.\n\n\n1. Organization\n2. Location\n3. Facility\n4. Event\n5. Product\n6. Person\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------\n\n\n\nCite\n----\n\n\nPlease cite the following paper in your publication if you are using ParsBERT in your research:\n\n\nAcknowledgments\n---------------\n\n\nWe hereby, express our gratitude to the Tensorflow Research Cloud (TFRC) program for providing us with the necessary computation resources. We also thank Hooshvare Research Group for facilitating dataset gathering and scraping online text resources.\n\n\nContributors\n------------\n\n\n* Mehrdad Farahani: Linkedin, Twitter, Github\n* Mohammad Gharachorloo: Linkedin, Twitter, Github\n* Marzieh Farahani: Linkedin, Twitter, Github\n* Mohammad Manthouri: Linkedin, Twitter, Github\n* Hooshvare Team: Official Website, Linkedin, Twitter, Github, Instagram\n\n\n* And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: Linkedin, Behance, Instagram\n\n\nReleases\n--------### Release v0.1 (May 29, 2019)\n\n\nThis is the first version of our ParsBERT NER!"
] |
token-classification | transformers |
## ParsBERT: Transformer-based Model for Persian Language Understanding
ParsBERT is a monolingual language model based on Google’s BERT architecture with the same configurations as BERT-Base.
Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515)
All the models (downstream tasks) are uncased and trained with whole word masking. (coming soon stay tuned)
## Persian NER [ARMAN, PEYMA, ARMAN+PEYMA]
This task aims to extract named entities in the text, such as names, and label them with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences that are marked in `IOB` format. In this format, tokens that are not part of an entity are tagged as `”O”`, the `”B”` tag corresponds to the first word of an entity, and the `”I”` tag corresponds to the rest of the terms of the same entity. Both `”B”` and `”I”` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, `ARMAN` and `PEYMA`. In ParsBERT, we prepared NER models for both datasets as well as for a combination of both.
### PEYMA
PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes.
1. Organization
2. Money
3. Location
4. Date
5. Time
6. Person
7. Percent
| Label | # |
|:------------:|:-----:|
| Organization | 16964 |
| Money | 2037 |
| Location | 8782 |
| Date | 4259 |
| Time | 732 |
| Person | 7675 |
| Percent | 699 |
**Download**
You can download the dataset from [here](http://nsurl.org/tasks/task-7-named-entity-recognition-ner-for-farsi/)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|---------|----------|------------|--------------|----------|----------------|------------|
| PEYMA | 98.79* | - | 90.59 | - | 84.00 | - |
## How to use :hugs:
| Notebook | Description | |
|:----------|:-------------|------:|
| [How to use Pipelines](https://github.com/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) | Simple and efficient way to use State-of-the-Art models on downstream tasks through transformers | [](https://colab.research.google.com/github/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) |
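As a quick alternative to the notebook above, a sketch like the one below groups word pieces back into whole entity spans; the sentence is just an example and assumes a reasonably recent Transformers release.
```python
from transformers import pipeline
# aggregation_strategy="simple" merges the sub-tokens of one entity, so the
# output contains whole spans (entity group, text, score, offsets) instead of
# individual word pieces.
ner = pipeline(
    "token-classification",
    model="HooshvareLab/bert-base-parsbert-peymaner-uncased",
    aggregation_strategy="simple",
)
print(ner("بانک مرکزی ایران در سال ۱۳۳۹ در تهران تاسیس شد."))
```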
## Cite
Please cite the following paper in your publication if you are using [ParsBERT](https://arxiv.org/abs/2005.12515) in your research:
```markdown
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Acknowledgments
We hereby, express our gratitude to the [Tensorflow Research Cloud (TFRC) program](https://tensorflow.org/tfrc) for providing us with the necessary computation resources. We also thank [Hooshvare](https://hooshvare.com) Research Group for facilitating dataset gathering and scraping online text resources.
## Contributors
- Mehrdad Farahani: [Linkedin](https://www.linkedin.com/in/m3hrdadfi/), [Twitter](https://twitter.com/m3hrdadfi), [Github](https://github.com/m3hrdadfi)
- Mohammad Gharachorloo: [Linkedin](https://www.linkedin.com/in/mohammad-gharachorloo/), [Twitter](https://twitter.com/MGharachorloo), [Github](https://github.com/baarsaam)
- Marzieh Farahani: [Linkedin](https://www.linkedin.com/in/marziehphi/), [Twitter](https://twitter.com/marziehphi), [Github](https://github.com/marziehphi)
- Mohammad Manthouri: [Linkedin](https://www.linkedin.com/in/mohammad-manthouri-aka-mansouri-07030766/), [Twitter](https://twitter.com/mmanthouri), [Github](https://github.com/mmanthouri)
- Hooshvare Team: [Official Website](https://hooshvare.com/), [Linkedin](https://www.linkedin.com/company/hooshvare), [Twitter](https://twitter.com/hooshvare), [Github](https://github.com/hooshvare), [Instagram](https://www.instagram.com/hooshvare/)
+ And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: [Linkedin](https://www.linkedin.com/in/sara-tabrizi-64548b79/), [Behance](https://www.behance.net/saratabrizi), [Instagram](https://www.instagram.com/sara_b_tabrizi/)
## Releases
### Release v0.1 (May 29, 2019)
This is the first version of our ParsBERT NER!
| {"language": "fa", "license": "apache-2.0"} | HooshvareLab/bert-base-parsbert-peymaner-uncased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"fa",
"arxiv:2005.12515",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2005.12515"
] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #bert #token-classification #fa #arxiv-2005.12515 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ParsBERT: Transformer-based Model for Persian Language Understanding
--------------------------------------------------------------------
ParsBERT is a monolingual language model based on Google’s BERT architecture with the same configurations as BERT-Base.
Paper presenting ParsBERT: arXiv:2005.12515
All the models (downstream tasks) are uncased and trained with whole word masking. (coming soon stay tuned)
Persian NER [ARMAN, PEYMA, ARMAN+PEYMA]
---------------------------------------
This task aims to extract named entities in the text, such as names and label with appropriate 'NER' classes such as locations, organizations, etc. The datasets used for this task contain sentences that are marked with 'IOB' format. In this format, tokens that are not part of an entity are tagged as '”O”' the '”B”'tag corresponds to the first word of an object, and the '”I”' tag corresponds to the rest of the terms of the same entity. Both '”B”' and '”I”' tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, 'ARMAN', and 'PEYMA'. In ParsBERT, we prepared ner for both datasets as well as a combination of both datasets.
### PEYMA
PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes.
1. Organization
2. Money
3. Location
4. Date
5. Time
6. Person
7. Percent
Download
You can download the dataset from here
Results
-------
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
How to use :hugs:
-----------------
Cite
----
Please cite the following paper in your publication if you are using ParsBERT in your research:
Acknowledgments
---------------
We hereby, express our gratitude to the Tensorflow Research Cloud (TFRC) program for providing us with the necessary computation resources. We also thank Hooshvare Research Group for facilitating dataset gathering and scraping online text resources.
Contributors
------------
* Mehrdad Farahani: Linkedin, Twitter, Github
* Mohammad Gharachorloo: Linkedin, Twitter, Github
* Marzieh Farahani: Linkedin, Twitter, Github
* Mohammad Manthouri: Linkedin, Twitter, Github
* Hooshvare Team: Official Website, Linkedin, Twitter, Github, Instagram
* And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: Linkedin, Behance, Instagram
Releases
--------
### Release v0.1 (May 29, 2019)
This is the first version of our ParsBERT NER!
| [
"### PEYMA\n\n\nPEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes.\n\n\n1. Organization\n2. Money\n3. Location\n4. Date\n5. Time\n6. Person\n7. Percent\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------\n\n\n\nCite\n----\n\n\nPlease cite the following paper in your publication if you are using ParsBERT in your research:\n\n\nAcknowledgments\n---------------\n\n\nWe hereby, express our gratitude to the Tensorflow Research Cloud (TFRC) program for providing us with the necessary computation resources. We also thank Hooshvare Research Group for facilitating dataset gathering and scraping online text resources.\n\n\nContributors\n------------\n\n\n* Mehrdad Farahani: Linkedin, Twitter, Github\n* Mohammad Gharachorloo: Linkedin, Twitter, Github\n* Marzieh Farahani: Linkedin, Twitter, Github\n* Mohammad Manthouri: Linkedin, Twitter, Github\n* Hooshvare Team: Official Website, Linkedin, Twitter, Github, Instagram\n\n\n* And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: Linkedin, Behance, Instagram\n\n\nReleases\n--------",
"### Release v0.1 (May 29, 2019)\n\n\nThis is the first version of our ParsBERT NER!"
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #fa #arxiv-2005.12515 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### PEYMA\n\n\nPEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes.\n\n\n1. Organization\n2. Money\n3. Location\n4. Date\n5. Time\n6. Person\n7. Percent\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------\n\n\n\nCite\n----\n\n\nPlease cite the following paper in your publication if you are using ParsBERT in your research:\n\n\nAcknowledgments\n---------------\n\n\nWe hereby, express our gratitude to the Tensorflow Research Cloud (TFRC) program for providing us with the necessary computation resources. We also thank Hooshvare Research Group for facilitating dataset gathering and scraping online text resources.\n\n\nContributors\n------------\n\n\n* Mehrdad Farahani: Linkedin, Twitter, Github\n* Mohammad Gharachorloo: Linkedin, Twitter, Github\n* Marzieh Farahani: Linkedin, Twitter, Github\n* Mohammad Manthouri: Linkedin, Twitter, Github\n* Hooshvare Team: Official Website, Linkedin, Twitter, Github, Instagram\n\n\n* And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: Linkedin, Behance, Instagram\n\n\nReleases\n--------",
"### Release v0.1 (May 29, 2019)\n\n\nThis is the first version of our ParsBERT NER!"
] | [
52,
351,
27
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #fa #arxiv-2005.12515 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### PEYMA\n\n\nPEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes.\n\n\n1. Organization\n2. Money\n3. Location\n4. Date\n5. Time\n6. Person\n7. Percent\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------\n\n\n\nCite\n----\n\n\nPlease cite the following paper in your publication if you are using ParsBERT in your research:\n\n\nAcknowledgments\n---------------\n\n\nWe hereby, express our gratitude to the Tensorflow Research Cloud (TFRC) program for providing us with the necessary computation resources. We also thank Hooshvare Research Group for facilitating dataset gathering and scraping online text resources.\n\n\nContributors\n------------\n\n\n* Mehrdad Farahani: Linkedin, Twitter, Github\n* Mohammad Gharachorloo: Linkedin, Twitter, Github\n* Marzieh Farahani: Linkedin, Twitter, Github\n* Mohammad Manthouri: Linkedin, Twitter, Github\n* Hooshvare Team: Official Website, Linkedin, Twitter, Github, Instagram\n\n\n* And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: Linkedin, Behance, Instagram\n\n\nReleases\n--------### Release v0.1 (May 29, 2019)\n\n\nThis is the first version of our ParsBERT NER!"
] |
fill-mask | transformers | ## ParsBERT: Transformer-based Model for Persian Language Understanding
ParsBERT is a monolingual language model based on Google’s BERT architecture with the same configurations as BERT-Base.
Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515)
All the models (downstream tasks) are uncased and trained with whole word masking. (coming soon stay tuned)
---
## Introduction
This model is pre-trained on a large Persian corpus with various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 2M documents. A large subset of this corpus was crawled manually.
As a part of ParsBERT methodology, an extensive pre-processing combining POS tagging and WordPiece segmentation was carried out to bring the corpus into a proper format. This process produces more than 40M true sentences.
## Evaluation
ParsBERT is evaluated on three NLP downstream tasks: Sentiment Analysis (SA), Text Classification, and Named Entity Recognition (NER). For this matter and due to insufficient resources, two large datasets for SA and two for text classification were manually composed, which are available for public use and benchmarking. ParsBERT outperformed all other language models, including multilingual BERT and other hybrid deep learning models for all tasks, improving the state-of-the-art performance in Persian language modeling.
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
### Sentiment Analysis (SA) task
| Dataset | ParsBERT | mBERT | DeepSentiPers |
|:--------------------------:|:---------:|:-----:|:-------------:|
| Digikala User Comments | 81.74* | 80.74 | - |
| SnappFood User Comments | 88.12* | 87.87 | - |
| SentiPers (Multi Class) | 71.11* | - | 69.33 |
| SentiPers (Binary Class) | 92.13* | - | 91.98 |
### Text Classification (TC) task
| Dataset | ParsBERT | mBERT |
|:-----------------:|:--------:|:-----:|
| Digikala Magazine | 93.59* | 90.72 |
| Persian News | 97.19* | 95.79 |
### Named Entity Recognition (NER) task
| Dataset | ParsBERT | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|:-------:|:--------:|:--------:|:----------:|:--------------:|:----------:|:----------------:|:------------:|
| PEYMA | 93.10* | 86.64 | - | 90.59 | - | 84.00 | - |
| ARMAN | 98.79* | 95.89 | 89.9 | 84.03 | 86.55 | - | 77.45 |
**If you tested ParsBERT on a public dataset and you want to add your results to the table above, open a pull request or contact us. Also make sure to have your code available online so we can add it as a reference**
## How to use
### TensorFlow 2.0
```python
from transformers import AutoConfig, AutoTokenizer, TFAutoModel
config = AutoConfig.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
model = TFAutoModel.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
text = "ما در هوشواره معتقدیم با انتقال صحیح دانش و آگاهی، همه افراد میتوانند از ابزارهای هوشمند استفاده کنند. شعار ما هوش مصنوعی برای همه است."
tokenizer.tokenize(text)
>>> ['ما', 'در', 'هوش', '##واره', 'معتقدیم', 'با', 'انتقال', 'صحیح', 'دانش', 'و', 'اگاهی', '،', 'همه', 'افراد', 'میتوانند', 'از', 'ابزارهای', 'هوشمند', 'استفاده', 'کنند', '.', 'شعار', 'ما', 'هوش', 'مصنوعی', 'برای', 'همه', 'است', '.']
```
### Pytorch
```python
from transformers import AutoConfig, AutoTokenizer, AutoModel
config = AutoConfig.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
model = AutoModel.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
```
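Because the checkpoint is trained with a masked-language-modeling objective, a small fill-mask sketch is a convenient sanity check; the sentence below simply adapts the example above and is not an official test case.
```python
from transformers import pipeline
# Fill-mask pipeline on top of the pre-trained ParsBERT checkpoint; the mask
# token of this BERT-style tokenizer is [MASK].
fill_mask = pipeline("fill-mask", model="HooshvareLab/bert-base-parsbert-uncased")
for prediction in fill_mask("هوش مصنوعی برای [MASK] است."):
    print(prediction["token_str"], round(prediction["score"], 3))
```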
## NLP Tasks Tutorial
Coming soon stay tuned
## Cite
Please cite the following paper in your publication if you are using [ParsBERT](https://arxiv.org/abs/2005.12515) in your research:
```markdown
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Acknowledgments
We hereby, express our gratitude to the [Tensorflow Research Cloud (TFRC) program](https://tensorflow.org/tfrc) for providing us with the necessary computation resources. We also thank [Hooshvare](https://hooshvare.com) Research Group for facilitating dataset gathering and scraping online text resources.
## Contributors
- Mehrdad Farahani: [Linkedin](https://www.linkedin.com/in/m3hrdadfi/), [Twitter](https://twitter.com/m3hrdadfi), [Github](https://github.com/m3hrdadfi)
- Mohammad Gharachorloo: [Linkedin](https://www.linkedin.com/in/mohammad-gharachorloo/), [Twitter](https://twitter.com/MGharachorloo), [Github](https://github.com/baarsaam)
- Marzieh Farahani: [Linkedin](https://www.linkedin.com/in/marziehphi/), [Twitter](https://twitter.com/marziehphi), [Github](https://github.com/marziehphi)
- Mohammad Manthouri: [Linkedin](https://www.linkedin.com/in/mohammad-manthouri-aka-mansouri-07030766/), [Twitter](https://twitter.com/mmanthouri), [Github](https://github.com/mmanthouri)
- Hooshvare Team: [Official Website](https://hooshvare.com/), [Linkedin](https://www.linkedin.com/company/hooshvare), [Twitter](https://twitter.com/hooshvare), [Github](https://github.com/hooshvare), [Instagram](https://www.instagram.com/hooshvare/)
## Releases
### Release v0.1 (May 27, 2019)
This is the first version of our ParsBERT based on BERT<sub>BASE</sub>
| {} | HooshvareLab/bert-base-parsbert-uncased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"arxiv:2005.12515",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2005.12515"
] | [] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #arxiv-2005.12515 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ParsBERT: Transformer-based Model for Persian Language Understanding
--------------------------------------------------------------------
ParsBERT is a monolingual language model based on Google’s BERT architecture with the same configurations as BERT-Base.
Paper presenting ParsBERT: arXiv:2005.12515
All the models (downstream tasks) are uncased and trained with whole word masking. (coming soon stay tuned)
---
Introduction
------------
This model is pre-trained on a large Persian corpus with various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 2M documents. A large subset of this corpus was crawled manually.
As a part of ParsBERT methodology, an extensive pre-processing combining POS tagging and WordPiece segmentation was carried out to bring the corpus into a proper format. This process produces more than 40M true sentences.
Evaluation
----------
ParsBERT is evaluated on three NLP downstream tasks: Sentiment Analysis (SA), Text Classification, and Named Entity Recognition (NER). For this matter and due to insufficient resources, two large datasets for SA and two for text classification were manually composed, which are available for public use and benchmarking. ParsBERT outperformed all other language models, including multilingual BERT and other hybrid deep learning models for all tasks, improving the state-of-the-art performance in Persian language modeling.
Results
-------
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
### Sentiment Analysis (SA) task
### Text Classification (TC) task
### Named Entity Recognition (NER) task
If you tested ParsBERT on a public dataset and you want to add your results to the table above, open a pull request or contact us. Also make sure to have your code available online so we can add it as a reference
How to use
----------
### TensorFlow 2.0
### Pytorch
NLP Tasks Tutorial
------------------
Coming soon stay tuned
Cite
----
Please cite the following paper in your publication if you are using ParsBERT in your research:
Acknowledgments
---------------
We hereby, express our gratitude to the Tensorflow Research Cloud (TFRC) program for providing us with the necessary computation resources. We also thank Hooshvare Research Group for facilitating dataset gathering and scraping online text resources.
Contributors
------------
* Mehrdad Farahani: Linkedin, Twitter, Github
* Mohammad Gharachorloo: Linkedin, Twitter, Github
* Marzieh Farahani: Linkedin, Twitter, Github
* Mohammad Manthouri: Linkedin, Twitter, Github
* Hooshvare Team: Official Website, Linkedin, Twitter, Github, Instagram
Releases
--------
### Release v0.1 (May 27, 2019)
This is the first version of our ParsBERT based on BERTBASE
| [
"### Sentiment Analysis (SA) task",
"### Text Classification (TC) task",
"### Named Entity Recognition (NER) task\n\n\n\nIf you tested ParsBERT on a public dataset and you want to add your results to the table above, open a pull request or contact us. Also make sure to have your code available online so we can add it as a reference\n\n\nHow to use\n----------",
"### TensorFlow 2.0",
"### Pytorch\n\n\nNLP Tasks Tutorial\n------------------\n\n\nComing soon stay tuned\n\n\nCite\n----\n\n\nPlease cite the following paper in your publication if you are using ParsBERT in your research:\n\n\nAcknowledgments\n---------------\n\n\nWe hereby, express our gratitude to the Tensorflow Research Cloud (TFRC) program for providing us with the necessary computation resources. We also thank Hooshvare Research Group for facilitating dataset gathering and scraping online text resources.\n\n\nContributors\n------------\n\n\n* Mehrdad Farahani: Linkedin, Twitter, Github\n* Mohammad Gharachorloo: Linkedin, Twitter, Github\n* Marzieh Farahani: Linkedin, Twitter, Github\n* Mohammad Manthouri: Linkedin, Twitter, Github\n* Hooshvare Team: Official Website, Linkedin, Twitter, Github, Instagram\n\n\nReleases\n--------",
"### Release v0.1 (May 27, 2019)\n\n\nThis is the first version of our ParsBERT based on BERTBASE"
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #arxiv-2005.12515 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Sentiment Analysis (SA) task",
"### Text Classification (TC) task",
"### Named Entity Recognition (NER) task\n\n\n\nIf you tested ParsBERT on a public dataset and you want to add your results to the table above, open a pull request or contact us. Also make sure to have your code available online so we can add it as a reference\n\n\nHow to use\n----------",
"### TensorFlow 2.0",
"### Pytorch\n\n\nNLP Tasks Tutorial\n------------------\n\n\nComing soon stay tuned\n\n\nCite\n----\n\n\nPlease cite the following paper in your publication if you are using ParsBERT in your research:\n\n\nAcknowledgments\n---------------\n\n\nWe hereby, express our gratitude to the Tensorflow Research Cloud (TFRC) program for providing us with the necessary computation resources. We also thank Hooshvare Research Group for facilitating dataset gathering and scraping online text resources.\n\n\nContributors\n------------\n\n\n* Mehrdad Farahani: Linkedin, Twitter, Github\n* Mohammad Gharachorloo: Linkedin, Twitter, Github\n* Marzieh Farahani: Linkedin, Twitter, Github\n* Mohammad Manthouri: Linkedin, Twitter, Github\n* Hooshvare Team: Official Website, Linkedin, Twitter, Github, Instagram\n\n\nReleases\n--------",
"### Release v0.1 (May 27, 2019)\n\n\nThis is the first version of our ParsBERT based on BERTBASE"
] | [
46,
9,
9,
72,
8,
233,
28
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #arxiv-2005.12515 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### Sentiment Analysis (SA) task### Text Classification (TC) task### Named Entity Recognition (NER) task\n\n\n\nIf you tested ParsBERT on a public dataset and you want to add your results to the table above, open a pull request or contact us. Also make sure to have your code available online so we can add it as a reference\n\n\nHow to use\n----------### TensorFlow 2.0### Pytorch\n\n\nNLP Tasks Tutorial\n------------------\n\n\nComing soon stay tuned\n\n\nCite\n----\n\n\nPlease cite the following paper in your publication if you are using ParsBERT in your research:\n\n\nAcknowledgments\n---------------\n\n\nWe hereby, express our gratitude to the Tensorflow Research Cloud (TFRC) program for providing us with the necessary computation resources. We also thank Hooshvare Research Group for facilitating dataset gathering and scraping online text resources.\n\n\nContributors\n------------\n\n\n* Mehrdad Farahani: Linkedin, Twitter, Github\n* Mohammad Gharachorloo: Linkedin, Twitter, Github\n* Marzieh Farahani: Linkedin, Twitter, Github\n* Mohammad Manthouri: Linkedin, Twitter, Github\n* Hooshvare Team: Official Website, Linkedin, Twitter, Github, Instagram\n\n\nReleases\n--------### Release v0.1 (May 27, 2019)\n\n\nThis is the first version of our ParsBERT based on BERTBASE"
] |
text-classification | transformers |
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned ParsBERT v1.1 on new Persian corpora in order to provide additional functionality for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian Text Classification [DigiMag, Persian News]
The task target is labeling texts in a supervised manner in both existing datasets `DigiMag` and `Persian News`.
### DigiMag
A total of 8,515 articles scraped from [Digikala Online Magazine](https://www.digikala.com/mag/). This dataset includes seven different classes.
1. Video Games
2. Shopping Guide
3. Health Beauty
4. Science Technology
5. General
6. Art Cinema
7. Books Literature
| Label | # |
|:------------------:|:----:|
| Video Games | 1967 |
| Shopping Guide | 125 |
| Health Beauty | 1610 |
| Science Technology | 2772 |
| General | 120 |
| Art Cinema | 1667 |
| Books Literature | 254 |
**Download**
You can download the dataset from [here](https://drive.google.com/uc?id=1YgrCYY-Z0h2z0-PfWVfOGt1Tv0JDI-qz)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT |
|:-----------------:|:-----------:|:-----------:|:-----:|
| Digikala Magazine | 93.65* | 93.59 | 90.72 |
## How to use :hugs:
| Task | Notebook |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Text Classification | [](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. | {"language": "fa", "license": "apache-2.0"} | HooshvareLab/bert-fa-base-uncased-clf-digimag | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ParsBERT (v2.0)
===============
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the ParsBERT repo for the latest information about previous and current models.
Persian Text Classification [DigiMag, Persian News]
---------------------------------------------------
The task target is labeling texts in a supervised manner in both existing datasets 'DigiMag' and 'Persian News'.
### DigiMag
A total of 8,515 articles scraped from Digikala Online Magazine. This dataset includes seven different classes.
1. Video Games
2. Shopping Guide
3. Health Beauty
4. Science Technology
5. General
6. Art Cinema
7. Books Literature
Download
You can download the dataset from here
Results
-------
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
How to use :hugs:
-----------------
### BibTeX entry and citation info
Please cite in publications as the following:
Questions?
----------
Post a Github issue on the ParsBERT Issues repo.
| [
"### DigiMag\n\n\nA total of 8,515 articles scraped from Digikala Online Magazine. This dataset includes seven different classes.\n\n\n1. Video Games\n2. Shopping Guide\n3. Health Beauty\n4. Science Technology\n5. General\n6. Art Cinema\n7. Books Literature\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### DigiMag\n\n\nA total of 8,515 articles scraped from Digikala Online Magazine. This dataset includes seven different classes.\n\n\n1. Video Games\n2. Shopping Guide\n3. Health Beauty\n4. Science Technology\n5. General\n6. Art Cinema\n7. Books Literature\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
43,
120,
45
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### DigiMag\n\n\nA total of 8,515 articles scraped from Digikala Online Magazine. This dataset includes seven different classes.\n\n\n1. Video Games\n2. Shopping Guide\n3. Health Beauty\n4. Science Technology\n5. General\n6. Art Cinema\n7. Books Literature\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] |
text-classification | transformers |
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned ParsBERT v1.1 on new Persian corpora in order to provide additional functionality for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian Text Classification [DigiMag, Persian News]
The task target is labeling texts in a supervised manner in both existing datasets `DigiMag` and `Persian News`.
### Persian News
A dataset of various news articles scraped from different online news agencies' websites. The total number of articles is 16,438, spread over eight different classes.
1. Social
2. Economic
3. International
4. Political
5. Science Technology
6. Cultural Art
7. Sport
8. Medical
| Label | # |
|:------------------:|:----:|
| Social | 2170 |
| Economic | 1564 |
| International | 1975 |
| Political | 2269 |
| Science Technology | 2436 |
| Cultural Art | 2558 |
| Sport | 1381 |
| Medical | 2085 |
**Download**
You can download the dataset from [here](https://drive.google.com/uc?id=1B6xotfXCcW9xS1mYSBQos7OCg0ratzKC)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT |
|:-----------------:|:-----------:|:-----------:|:-----:|
| Persian News | 97.44* | 97.19 | 95.79 |
## How to use :hugs:
| Task | Notebook |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Text Classification | [](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. | {"language": "fa", "license": "apache-2.0"} | HooshvareLab/bert-fa-base-uncased-clf-persiannews | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ParsBERT (v2.0)
===============
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the ParsBERT repo for the latest information about previous and current models.
Persian Text Classification [DigiMag, Persian News]
---------------------------------------------------
The task target is labeling texts in a supervised manner in both existing datasets 'DigiMag' and 'Persian News'.
### Persian News
A dataset of various news articles scraped from different online news agencies' websites. The total number of articles is 16,438, spread over eight different classes.
1. Economic
2. International
3. Political
4. Science Technology
5. Cultural Art
6. Sport
7. Medical
Download
You can download the dataset from here
Results
-------
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
How to use :hugs:
-----------------
### BibTeX entry and citation info
Please cite in publications as the following:
Questions?
----------
Post a Github issue on the ParsBERT Issues repo.
| [
"### Persian News\n\n\nA dataset of various news articles scraped from different online news agencies' websites. The total number of articles is 16,438, spread over eight different classes.\n\n\n1. Economic\n2. International\n3. Political\n4. Science Technology\n5. Cultural Art\n6. Sport\n7. Medical\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Persian News\n\n\nA dataset of various news articles scraped from different online news agencies' websites. The total number of articles is 16,438, spread over eight different classes.\n\n\n1. Economic\n2. International\n3. Political\n4. Science Technology\n5. Cultural Art\n6. Sport\n7. Medical\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
47,
124,
45
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### Persian News\n\n\nA dataset of various news articles scraped from different online news agencies' websites. The total number of articles is 16,438, spread over eight different classes.\n\n\n1. Economic\n2. International\n3. Political\n4. Science Technology\n5. Cultural Art\n6. Sport\n7. Medical\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] |
token-classification | transformers |
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian NER [ARMAN, PEYMA]
This task aims to extract named entities from text, such as person names, and label them with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences marked in the `IOB` format. In this format, tokens that are not part of an entity are tagged as `O`, the `B` tag marks the first token of an entity, and the `I` tag marks the remaining tokens of the same entity. Both `B` and `I` tags are followed by a hyphen (or underscore) and then the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens of a raw input text. There are two primary datasets used in Persian NER, `ARMAN` and `PEYMA`.
### ARMAN
ARMAN dataset holds 7,682 sentences with 250,015 tokens, tagged over six different classes.
1. Organization
2. Location
3. Facility
4. Event
5. Product
6. Person
| Label | # |
|:------------:|:-----:|
| Organization | 30108 |
| Location | 12924 |
| Facility | 4458 |
| Event | 7557 |
| Product | 4389 |
| Person | 15645 |
**Download**
You can download the dataset from [here](https://github.com/HaniehP/PersianNER)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|---------|-------------|-------------|-------|------------|--------------|----------|----------------|------------|
| ARMAN | 99.84* | 98.79 | 95.89 | 89.9 | 84.03 | 86.55 | - | 77.45 |
## How to use :hugs:
| Notebook | Description | |
|:----------|:-------------|------:|
| [How to use Pipelines](https://github.com/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) | Simple and efficient way to use State-of-the-Art models on downstream tasks through transformers | [](https://colab.research.google.com/github/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) |
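A quick alternative to the notebook is the `ner` (token-classification) pipeline, sketched below with an illustrative sentence; the exact entity label strings depend on the checkpoint's config.

```python
from transformers import pipeline

# Minimal sketch: IOB-style token tagging with the ARMAN fine-tuned checkpoint.
ner = pipeline("ner", model="HooshvareLab/bert-fa-base-uncased-ner-arman")

# Illustrative sentence; each sub-token comes back with its predicted tag and score.
for entity in ner("او در دانشگاه تهران تحصیل کرد"):
    print(entity["word"], entity["entity"], round(entity["score"], 3))
```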
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. | {"language": "fa", "license": "apache-2.0"} | HooshvareLab/bert-fa-base-uncased-ner-arman | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #bert #token-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ParsBERT (v2.0)
===============
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the ParsBERT repo for the latest information about previous and current models.
Persian NER [ARMAN, PEYMA]
--------------------------
This task aims to extract named entities in the text, such as names and label with appropriate 'NER' classes such as locations, organizations, etc. The datasets used for this task contain sentences that are marked with 'IOB' format. In this format, tokens that are not part of an entity are tagged as '”O”' the '”B”'tag corresponds to the first word of an object, and the '”I”' tag corresponds to the rest of the terms of the same entity. Both '”B”' and '”I”' tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, 'ARMAN', and 'PEYMA'.
### ARMAN
ARMAN dataset holds 7,682 sentences with 250,015 sentences tagged over six different classes.
1. Organization
2. Location
3. Facility
4. Event
5. Product
6. Person
Download
You can download the dataset from here
Results
-------
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
How to use :hugs:
-----------------
### BibTeX entry and citation info
Please cite in publications as the following:
Questions?
----------
Post a Github issue on the ParsBERT Issues repo.
| [
"### ARMAN\n\n\nARMAN dataset holds 7,682 sentences with 250,015 sentences tagged over six different classes.\n\n\n1. Organization\n2. Location\n3. Facility\n4. Event\n5. Product\n6. Person\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### ARMAN\n\n\nARMAN dataset holds 7,682 sentences with 250,015 sentences tagged over six different classes.\n\n\n1. Organization\n2. Location\n3. Facility\n4. Event\n5. Product\n6. Person\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
43,
108,
45
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### ARMAN\n\n\nARMAN dataset holds 7,682 sentences with 250,015 sentences tagged over six different classes.\n\n\n1. Organization\n2. Location\n3. Facility\n4. Event\n5. Product\n6. Person\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] |
token-classification | transformers |
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian NER [ARMAN, PEYMA]
This task aims to extract named entities from text, such as person names, and label them with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences marked in the `IOB` format. In this format, tokens that are not part of an entity are tagged as `O`, the `B` tag marks the first token of an entity, and the `I` tag marks the remaining tokens of the same entity. Both `B` and `I` tags are followed by a hyphen (or underscore) and then the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens of a raw input text. There are two primary datasets used in Persian NER, `ARMAN` and `PEYMA`.
### PEYMA
PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes.
1. Organization
2. Money
3. Location
4. Date
5. Time
6. Person
7. Percent
| Label | # |
|:------------:|:-----:|
| Organization | 16964 |
| Money | 2037 |
| Location | 8782 |
| Date | 4259 |
| Time | 732 |
| Person | 7675 |
| Percent | 699 |
**Download**
You can download the dataset from [here](http://nsurl.org/tasks/task-7-named-entity-recognition-ner-for-farsi/)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|---------|-------------|-------------|-------|------------|--------------|----------|----------------|------------|
| PEYMA | 93.40* | 93.10 | 86.64 | - | 90.59 | - | 84.00 | - |
## How to use :hugs:
| Notebook | Description | |
|:----------|:-------------|------:|
| [How to use Pipelines](https://github.com/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) | Simple and efficient way to use State-of-the-Art models on downstream tasks through transformers | [](https://colab.research.google.com/github/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) |
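For finer control than the pipeline offers, the checkpoint can also be driven directly through `AutoModelForTokenClassification`; the sketch below uses an illustrative sentence and simply maps each WordPiece token to the label stored in the model config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "HooshvareLab/bert-fa-base-uncased-ner-peyma"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Illustrative sentence containing a date and a location.
text = "او در ۱۲ مرداد به تهران سفر کرد"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Highest-scoring class per token, decoded through id2label.
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[int(pred)])
```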
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. | {"language": "fa", "license": "apache-2.0"} | HooshvareLab/bert-fa-base-uncased-ner-peyma | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #bert #token-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ParsBERT (v2.0)
===============
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the ParsBERT repo for the latest information about previous and current models.
Persian NER [ARMAN, PEYMA]
--------------------------
This task aims to extract named entities in the text, such as names and label with appropriate 'NER' classes such as locations, organizations, etc. The datasets used for this task contain sentences that are marked with 'IOB' format. In this format, tokens that are not part of an entity are tagged as '”O”' the '”B”'tag corresponds to the first word of an object, and the '”I”' tag corresponds to the rest of the terms of the same entity. Both '”B”' and '”I”' tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, 'ARMAN', and 'PEYMA'.
### PEYMA
PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes.
1. Organization
2. Money
3. Location
4. Date
5. Time
6. Person
7. Percent
Download
You can download the dataset from here
Results
-------
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
How to use :hugs:
-----------------
### BibTeX entry and citation info
Please cite in publications as the following:
Questions?
----------
Post a Github issue on the ParsBERT Issues repo.
| [
"### PEYMA\n\n\nPEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes.\n\n\n1. Organization\n2. Money\n3. Location\n4. Date\n5. Time\n6. Person\n7. Percent\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### PEYMA\n\n\nPEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes.\n\n\n1. Organization\n2. Money\n3. Location\n4. Date\n5. Time\n6. Person\n7. Percent\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
43,
123,
45
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### PEYMA\n\n\nPEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes.\n\n\n1. Organization\n2. Money\n3. Location\n4. Date\n5. Time\n6. Person\n7. Percent\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] |
text-classification | transformers |
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types.
### DeepSentiPers
DeepSentiPers, a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes: two positive (i.e., happy and delighted), two negative (i.e., furious and angry), and one neutral class. Therefore, this dataset can be used for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset.
**Binary:**
1. Negative (Furious + Angry)
2. Positive (Happy + Delighted)
**Multi**
1. Furious
2. Angry
3. Neutral
4. Happy
5. Delighted
| Label | # |
|:---------:|:----:|
| Furious | 236 |
| Angry | 1357 |
| Neutral | 2874 |
| Happy | 2848 |
| Delighted | 2516 |
**Download**
You can download the dataset from:
- [SentiPers](https://github.com/phosseini/sentipers)
- [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------:|:-----------:|:-----:|:-------------:|
| SentiPers (Multi Class) | 71.31* | 71.11 | - | 69.33 |
| SentiPers (Binary Class) | 92.42* | 92.13 | - | 91.98 |
## How to use :hugs:
| Task | Notebook |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Sentiment Analysis | [](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. | {"language": "fa", "license": "apache-2.0"} | HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ParsBERT (v2.0)
===============
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the ParsBERT repo for the latest information about previous and current models.
Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
------------------------------------------------------
It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: 'Digikala' user comments, 'SnappFood' user comments, and 'DeepSentiPers' in two binary-form and multi-form types.
### DeepSentiPers
which is a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes; two positives (i.e., happy and delighted), two negatives (i.e., furious and angry) and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset.
Binary:
1. Negative (Furious + Angry)
2. Positive (Happy + Delighted)
Multi
1. Furious
2. Angry
3. Neutral
4. Happy
5. Delighted
Download
You can download the dataset from:
* SentiPers
* DeepSentiPers
Results
-------
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
How to use :hugs:
-----------------
### BibTeX entry and citation info
Please cite in publications as the following:
Questions?
----------
Post a Github issue on the ParsBERT Issues repo.
| [
"### DeepSentiPers\n\n\nwhich is a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes; two positives (i.e., happy and delighted), two negatives (i.e., furious and angry) and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset.\n\n\nBinary:\n\n\n1. Negative (Furious + Angry)\n2. Positive (Happy + Delighted)\n\n\nMulti\n\n\n1. Furious\n2. Angry\n3. Neutral\n4. Happy\n5. Delighted\n\n\n\nDownload\nYou can download the dataset from:\n\n\n* SentiPers\n* DeepSentiPers\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### DeepSentiPers\n\n\nwhich is a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes; two positives (i.e., happy and delighted), two negatives (i.e., furious and angry) and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset.\n\n\nBinary:\n\n\n1. Negative (Furious + Angry)\n2. Positive (Happy + Delighted)\n\n\nMulti\n\n\n1. Furious\n2. Angry\n3. Neutral\n4. Happy\n5. Delighted\n\n\n\nDownload\nYou can download the dataset from:\n\n\n* SentiPers\n* DeepSentiPers\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
43,
210,
45
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### DeepSentiPers\n\n\nwhich is a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes; two positives (i.e., happy and delighted), two negatives (i.e., furious and angry) and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset.\n\n\nBinary:\n\n\n1. Negative (Furious + Angry)\n2. Positive (Happy + Delighted)\n\n\nMulti\n\n\n1. Furious\n2. Angry\n3. Neutral\n4. Happy\n5. Delighted\n\n\n\nDownload\nYou can download the dataset from:\n\n\n* SentiPers\n* DeepSentiPers\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] |
text-classification | transformers |
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types.
### DeepSentiPers
DeepSentiPers, a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes: two positive (i.e., happy and delighted), two negative (i.e., furious and angry), and one neutral class. Therefore, this dataset can be used for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset.
**Binary:**
1. Negative (Furious + Angry)
2. Positive (Happy + Delighted)
**Multi**
1. Furious
2. Angry
3. Neutral
4. Happy
5. Delighted
| Label | # |
|:---------:|:----:|
| Furious | 236 |
| Angry | 1357 |
| Neutral | 2874 |
| Happy | 2848 |
| Delighted | 2516 |
**Download**
You can download the dataset from:
- [SentiPers](https://github.com/phosseini/sentipers)
- [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------:|:-----------:|:-----:|:-------------:|
| SentiPers (Multi Class) | 71.31* | 71.11 | - | 69.33 |
| SentiPers (Binary Class) | 92.42* | 92.13 | - | 91.98 |
## How to use :hugs:
| Task | Notebook |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Sentiment Analysis | [](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. | {"language": "fa", "license": "apache-2.0"} | HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-multi | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ParsBERT (v2.0)
===============
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the ParsBERT repo for the latest information about previous and current models.
Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
------------------------------------------------------
It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: 'Digikala' user comments, 'SnappFood' user comments, and 'DeepSentiPers' in two binary-form and multi-form types.
### DeepSentiPers
which is a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes; two positives (i.e., happy and delighted), two negatives (i.e., furious and angry) and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset.
Binary:
1. Negative (Furious + Angry)
2. Positive (Happy + Delighted)
Multi
1. Furious
2. Angry
3. Neutral
4. Happy
5. Delighted
Download
You can download the dataset from:
* SentiPers
* DeepSentiPers
Results
-------
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
How to use :hugs:
-----------------
### BibTeX entry and citation info
Please cite in publications as the following:
Questions?
----------
Post a Github issue on the ParsBERT Issues repo.
| [
"### DeepSentiPers\n\n\nwhich is a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes; two positives (i.e., happy and delighted), two negatives (i.e., furious and angry) and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset.\n\n\nBinary:\n\n\n1. Negative (Furious + Angry)\n2. Positive (Happy + Delighted)\n\n\nMulti\n\n\n1. Furious\n2. Angry\n3. Neutral\n4. Happy\n5. Delighted\n\n\n\nDownload\nYou can download the dataset from:\n\n\n* SentiPers\n* DeepSentiPers\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### DeepSentiPers\n\n\nwhich is a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes; two positives (i.e., happy and delighted), two negatives (i.e., furious and angry) and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset.\n\n\nBinary:\n\n\n1. Negative (Furious + Angry)\n2. Positive (Happy + Delighted)\n\n\nMulti\n\n\n1. Furious\n2. Angry\n3. Neutral\n4. Happy\n5. Delighted\n\n\n\nDownload\nYou can download the dataset from:\n\n\n* SentiPers\n* DeepSentiPers\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
43,
210,
45
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### DeepSentiPers\n\n\nwhich is a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes; two positives (i.e., happy and delighted), two negatives (i.e., furious and angry) and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset.\n\n\nBinary:\n\n\n1. Negative (Furious + Angry)\n2. Positive (Happy + Delighted)\n\n\nMulti\n\n\n1. Furious\n2. Angry\n3. Neutral\n4. Happy\n5. Delighted\n\n\n\nDownload\nYou can download the dataset from:\n\n\n* SentiPers\n* DeepSentiPers\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] |
text-classification | transformers |
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types.
### Digikala
Digikala user comments provided by [Open Data Mining Program (ODMP)](https://www.digikala.com/opendata/). This dataset contains 62,321 user comments with three labels:
| Label | # |
|:---------------:|:------:|
| no_idea | 10394 |
| not_recommended | 15885 |
| recommended | 36042 |
**Download**
You can download the dataset from [here](https://www.digikala.com/opendata/)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------:|:-----------:|:-----:|:-------------:|
| Digikala User Comments | 81.72 | 81.74* | 80.74 | - |
## How to use :hugs:
| Task | Notebook |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Sentiment Analysis | [](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. | {"language": "fa", "license": "apache-2.0"} | HooshvareLab/bert-fa-base-uncased-sentiment-digikala | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ParsBERT (v2.0)
===============
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the ParsBERT repo for the latest information about previous and current models.
Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
------------------------------------------------------
It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: 'Digikala' user comments, 'SnappFood' user comments, and 'DeepSentiPers' in two binary-form and multi-form types.
### Digikala
Digikala user comments provided by Open Data Mining Program (ODMP). This dataset contains 62,321 user comments with three labels:
Download
You can download the dataset from here
Results
-------
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
How to use :hugs:
-----------------
### BibTeX entry and citation info
Please cite in publications as the following:
Questions?
----------
Post a Github issue on the ParsBERT Issues repo.
| [
"### Digikala\n\n\nDigikala user comments provided by Open Data Mining Program (ODMP). This dataset contains 62,321 user comments with three labels:\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Digikala\n\n\nDigikala user comments provided by Open Data Mining Program (ODMP). This dataset contains 62,321 user comments with three labels:\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
43,
99,
45
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Digikala\n\n\nDigikala user comments provided by Open Data Mining Program (ODMP). This dataset contains 62,321 user comments with three labels:\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] |
text-classification | transformers |
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in two binary-form and multi-form types.
### SnappFood
[Snappfood](https://snappfood.ir/) (an online food delivery company) user comments containing 70,000 comments with two labels (i.e. polarity classification):
1. Happy
2. Sad
| Label | # |
|:--------:|:-----:|
| Negative | 35000 |
| Positive | 35000 |
**Download**
You can download the dataset from [here](https://drive.google.com/uc?id=15J4zPN1BD7Q_ZIQ39VeFquwSoW8qTxgu)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------:|:-----------:|:-----:|:-------------:|
| SnappFood User Comments | 87.98 | 88.12* | 87.87 | - |
## How to use :hugs:
| Task | Notebook |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Sentiment Analysis | [](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. | {"language": "fa", "license": "apache-2.0"} | HooshvareLab/bert-fa-base-uncased-sentiment-snappfood | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| ParsBERT (v2.0)
===============
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the ParsBERT repo for the latest information about previous and current models.
Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
------------------------------------------------------
It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: 'Digikala' user comments, 'SnappFood' user comments, and 'DeepSentiPers' in two binary-form and multi-form types.
### SnappFood
Snappfood (an online food delivery company) user comments containing 70,000 comments with two labels (i.e. polarity classification):
1. Happy
2. Sad
Download
You can download the dataset from here
Results
-------
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
How to use :hugs:
-----------------
### BibTeX entry and citation info
Please cite in publications as the following:
Questions?
----------
Post a Github issue on the ParsBERT Issues repo.
| [
"### SnappFood\n\n\nSnappfood (an online food delivery company) user comments containing 70,000 comments with two labels (i.e. polarity classification):\n\n\n1. Happy\n2. Sad\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### SnappFood\n\n\nSnappfood (an online food delivery company) user comments containing 70,000 comments with two labels (i.e. polarity classification):\n\n\n1. Happy\n2. Sad\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
43,
105,
45
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### SnappFood\n\n\nSnappfood (an online food delivery company) user comments containing 70,000 comments with two labels (i.e. polarity classification):\n\n\n1. Happy\n2. Sad\n\n\n\nDownload\nYou can download the dataset from here\n\n\nResults\n-------\n\n\nThe following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.\n\n\n\nHow to use :hugs:\n-----------------### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] |
fill-mask | transformers |
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Introduction
ParsBERT is a monolingual language model based on Google’s BERT architecture. This model is pre-trained on large Persian corpora with various writing styles from numerous subjects (e.g., scientific, novels, news) with more than `3.9M` documents, `73M` sentences, and `1.3B` words.
Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515)
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=bert-fa) to look for
fine-tuned versions on a task that interests you.
### How to use
#### TensorFlow 2.0
```python
from transformers import AutoConfig, AutoTokenizer, TFAutoModel
config = AutoConfig.from_pretrained("HooshvareLab/bert-fa-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-fa-base-uncased")
model = TFAutoModel.from_pretrained("HooshvareLab/bert-fa-base-uncased")
text = "ما در هوشواره معتقدیم با انتقال صحیح دانش و آگاهی، همه افراد میتوانند از ابزارهای هوشمند استفاده کنند. شعار ما هوش مصنوعی برای همه است."
tokenizer.tokenize(text)
>>> ['ما', 'در', 'هوش', '##واره', 'معتقدیم', 'با', 'انتقال', 'صحیح', 'دانش', 'و', 'اگاهی', '،', 'همه', 'افراد', 'میتوانند', 'از', 'ابزارهای', 'هوشمند', 'استفاده', 'کنند', '.', 'شعار', 'ما', 'هوش', 'مصنوعی', 'برای', 'همه', 'است', '.']
```
#### Pytorch
```python
from transformers import AutoConfig, AutoTokenizer, AutoModel
config = AutoConfig.from_pretrained("HooshvareLab/bert-fa-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-fa-base-uncased")
model = AutoModel.from_pretrained("HooshvareLab/bert-fa-base-uncased")
```
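#### Fill-Mask pipeline

Since the raw model is intended for masked language modeling, the `fill-mask` pipeline is the quickest sanity check; the Persian prompt below is illustrative.

```python
from transformers import pipeline

# Minimal sketch: top predictions for a masked token with the base checkpoint.
fill_mask = pipeline("fill-mask", model="HooshvareLab/bert-fa-base-uncased")

# [MASK] is the tokenizer's mask token; the prompt is illustrative.
for prediction in fill_mask("شعار ما هوش [MASK] برای همه است."):
    print(prediction["token_str"], round(prediction["score"], 3))
```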
## Training
ParsBERT was trained on a massive amount of public corpora ([Persian Wikidumps](https://dumps.wikimedia.org/fawiki/), [MirasText](https://github.com/miras-tech/MirasText)) and six other manually crawled text collections from various types of websites ([BigBang Page](https://bigbangpage.com/) `scientific`, [Chetor](https://www.chetor.com/) `lifestyle`, [Eligasht](https://www.eligasht.com/Blog/) `itinerary`, [Digikala](https://www.digikala.com/mag/) `digital magazine`, [Ted Talks](https://www.ted.com/talks) `general conversational`, and Books `novels, storybooks, short stories from old to the contemporary era`).
As a part of ParsBERT methodology, an extensive pre-processing combining POS tagging and WordPiece segmentation was carried out to bring the corpora into a proper format.
## Goals
Objective goals during training are as below (after 300k steps).
``` bash
***** Eval results *****
global_step = 300000
loss = 1.4392426
masked_lm_accuracy = 0.6865794
masked_lm_loss = 1.4469004
next_sentence_accuracy = 1.0
next_sentence_loss = 6.534152e-05
```
## Derivative models
### Base Config
#### ParsBERT v2.0 Model
- [HooshvareLab/bert-fa-base-uncased](https://huggingface.co/HooshvareLab/bert-fa-base-uncased)
#### ParsBERT v2.0 Sentiment Analysis
- [HooshvareLab/bert-fa-base-uncased-sentiment-digikala](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-sentiment-digikala)
- [HooshvareLab/bert-fa-base-uncased-sentiment-snappfood](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-sentiment-snappfood)
- [HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary)
- [HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-multi](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-multi)
#### ParsBERT v2.0 Text Classification
- [HooshvareLab/bert-fa-base-uncased-clf-digimag](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-clf-digimag)
- [HooshvareLab/bert-fa-base-uncased-clf-persiannews](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-clf-persiannews)
#### ParsBERT v2.0 NER
- [HooshvareLab/bert-fa-base-uncased-ner-peyma](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-ner-peyma)
- [HooshvareLab/bert-fa-base-uncased-ner-arman](https://huggingface.co/HooshvareLab/bert-fa-base-uncased-ner-arman)
## Eval results
ParsBERT is evaluated on three NLP downstream tasks: Sentiment Analysis (SA), Text Classification, and Named Entity Recognition (NER). For this matter and due to insufficient resources, two large datasets for SA and two for text classification were manually composed, which are available for public use and benchmarking. ParsBERT outperformed all other language models, including multilingual BERT and other hybrid deep learning models for all tasks, improving the state-of-the-art performance in Persian language modeling.
### Sentiment Analysis (SA) Task
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------:|:-----------:|:-----:|:-------------:|
| Digikala User Comments | 81.72 | 81.74* | 80.74 | - |
| SnappFood User Comments | 87.98 | 88.12* | 87.87 | - |
| SentiPers (Multi Class) | 71.31* | 71.11 | - | 69.33 |
| SentiPers (Binary Class) | 92.42* | 92.13 | - | 91.98 |
### Text Classification (TC) Task
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT |
|:-----------------:|:-----------:|:-----------:|:-----:|
| Digikala Magazine | 93.65* | 93.59 | 90.72 |
| Persian News | 97.44* | 97.19 | 95.79 |
### Named Entity Recognition (NER) Task
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|:-------:|:-----------:|:-----------:|:-----:|:----------:|:------------:|:--------:|:--------------:|:----------:|
| PEYMA | 93.40* | 93.10 | 86.64 | - | 90.59 | - | 84.00 | - |
| ARMAN | 99.84* | 98.79 | 95.89 | 89.9 | 84.03 | 86.55 | - | 77.45 |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo.
| {"language": "fa", "license": "apache-2.0", "tags": ["bert-fa", "bert-persian", "persian-lm"]} | HooshvareLab/bert-fa-base-uncased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"bert-fa",
"bert-persian",
"persian-lm",
"fa",
"arxiv:2005.12515",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2005.12515"
] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #bert-fa #bert-persian #persian-lm #fa #arxiv-2005.12515 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ParsBERT (v2.0)
===============
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the ParsBERT repo for the latest information about previous and current models.
Introduction
------------
ParsBERT is a monolingual language model based on Google’s BERT architecture. This model is pre-trained on large Persian corpora with various writing styles from numerous subjects (e.g., scientific, novels, news) with more than '3.9M' documents, '73M' sentences, and '1.3B' words.
Paper presenting ParsBERT: arXiv:2005.12515
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
### How to use
#### TensorFlow 2.0
#### Pytorch
Training
--------
ParsBERT trained on a massive amount of public corpora (Persian Wikidumps, MirasText) and six other manually crawled text data from a various type of websites (BigBang Page 'scientific', Chetor 'lifestyle', Eligasht 'itinerary', Digikala 'digital magazine', Ted Talks 'general conversational', Books 'novels, storybooks, short stories from old to the contemporary era').
As a part of ParsBERT methodology, an extensive pre-processing combining POS tagging and WordPiece segmentation was carried out to bring the corpora into a proper format.
Goals
-----
Objective goals during training are as below (after 300k steps).
Derivative models
-----------------
### Base Config
#### ParsBERT v2.0 Model
* HooshvareLab/bert-fa-base-uncased
#### ParsBERT v2.0 Sentiment Analysis
* HooshvareLab/bert-fa-base-uncased-sentiment-digikala
* HooshvareLab/bert-fa-base-uncased-sentiment-snappfood
* HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary
* HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-multi
#### ParsBERT v2.0 Text Classification
* HooshvareLab/bert-fa-base-uncased-clf-digimag
* HooshvareLab/bert-fa-base-uncased-clf-persiannews
#### ParsBERT v2.0 NER
* HooshvareLab/bert-fa-base-uncased-ner-peyma
* HooshvareLab/bert-fa-base-uncased-ner-arman
Eval results
------------
ParsBERT is evaluated on three NLP downstream tasks: Sentiment Analysis (SA), Text Classification, and Named Entity Recognition (NER). For this matter and due to insufficient resources, two large datasets for SA and two for text classification were manually composed, which are available for public use and benchmarking. ParsBERT outperformed all other language models, including multilingual BERT and other hybrid deep learning models for all tasks, improving the state-of-the-art performance in Persian language modeling.
### Sentiment Analysis (SA) Task
### Text Classification (TC) Task
### Named Entity Recognition (NER) Task
### BibTeX entry and citation info
Please cite in publications as the following:
Questions?
----------
Post a Github issue on the ParsBERT Issues repo.
| [
"### How to use",
"#### TensorFlow 2.0",
"#### Pytorch\n\n\nTraining\n--------\n\n\nParsBERT trained on a massive amount of public corpora (Persian Wikidumps, MirasText) and six other manually crawled text data from a various type of websites (BigBang Page 'scientific', Chetor 'lifestyle', Eligasht 'itinerary', Digikala 'digital magazine', Ted Talks 'general conversational', Books 'novels, storybooks, short stories from old to the contemporary era').\n\n\nAs a part of ParsBERT methodology, an extensive pre-processing combining POS tagging and WordPiece segmentation was carried out to bring the corpora into a proper format.\n\n\nGoals\n-----\n\n\nObjective goals during training are as below (after 300k steps).\n\n\nDerivative models\n-----------------",
"### Base Config",
"#### ParsBERT v2.0 Model\n\n\n* HooshvareLab/bert-fa-base-uncased",
"#### ParsBERT v2.0 Sentiment Analysis\n\n\n* HooshvareLab/bert-fa-base-uncased-sentiment-digikala\n* HooshvareLab/bert-fa-base-uncased-sentiment-snappfood\n* HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary\n* HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-multi",
"#### ParsBERT v2.0 Text Classification\n\n\n* HooshvareLab/bert-fa-base-uncased-clf-digimag\n* HooshvareLab/bert-fa-base-uncased-clf-persiannews",
"#### ParsBERT v2.0 NER\n\n\n* HooshvareLab/bert-fa-base-uncased-ner-peyma\n* HooshvareLab/bert-fa-base-uncased-ner-arman\n\n\nEval results\n------------\n\n\nParsBERT is evaluated on three NLP downstream tasks: Sentiment Analysis (SA), Text Classification, and Named Entity Recognition (NER). For this matter and due to insufficient resources, two large datasets for SA and two for text classification were manually composed, which are available for public use and benchmarking. ParsBERT outperformed all other language models, including multilingual BERT and other hybrid deep learning models for all tasks, improving the state-of-the-art performance in Persian language modeling.",
"### Sentiment Analysis (SA) Task",
"### Text Classification (TC) Task",
"### Named Entity Recognition (NER) Task",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #bert-fa #bert-persian #persian-lm #fa #arxiv-2005.12515 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### How to use",
"#### TensorFlow 2.0",
"#### Pytorch\n\n\nTraining\n--------\n\n\nParsBERT trained on a massive amount of public corpora (Persian Wikidumps, MirasText) and six other manually crawled text data from a various type of websites (BigBang Page 'scientific', Chetor 'lifestyle', Eligasht 'itinerary', Digikala 'digital magazine', Ted Talks 'general conversational', Books 'novels, storybooks, short stories from old to the contemporary era').\n\n\nAs a part of ParsBERT methodology, an extensive pre-processing combining POS tagging and WordPiece segmentation was carried out to bring the corpora into a proper format.\n\n\nGoals\n-----\n\n\nObjective goals during training are as below (after 300k steps).\n\n\nDerivative models\n-----------------",
"### Base Config",
"#### ParsBERT v2.0 Model\n\n\n* HooshvareLab/bert-fa-base-uncased",
"#### ParsBERT v2.0 Sentiment Analysis\n\n\n* HooshvareLab/bert-fa-base-uncased-sentiment-digikala\n* HooshvareLab/bert-fa-base-uncased-sentiment-snappfood\n* HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary\n* HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-multi",
"#### ParsBERT v2.0 Text Classification\n\n\n* HooshvareLab/bert-fa-base-uncased-clf-digimag\n* HooshvareLab/bert-fa-base-uncased-clf-persiannews",
"#### ParsBERT v2.0 NER\n\n\n* HooshvareLab/bert-fa-base-uncased-ner-peyma\n* HooshvareLab/bert-fa-base-uncased-ner-arman\n\n\nEval results\n------------\n\n\nParsBERT is evaluated on three NLP downstream tasks: Sentiment Analysis (SA), Text Classification, and Named Entity Recognition (NER). For this matter and due to insufficient resources, two large datasets for SA and two for text classification were manually composed, which are available for public use and benchmarking. ParsBERT outperformed all other language models, including multilingual BERT and other hybrid deep learning models for all tasks, improving the state-of-the-art performance in Persian language modeling.",
"### Sentiment Analysis (SA) Task",
"### Text Classification (TC) Task",
"### Named Entity Recognition (NER) Task",
"### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] | [
69,
6,
9,
186,
7,
27,
103,
57,
178,
9,
9,
11,
45
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #bert-fa #bert-persian #persian-lm #fa #arxiv-2005.12515 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### How to use#### TensorFlow 2.0#### Pytorch\n\n\nTraining\n--------\n\n\nParsBERT trained on a massive amount of public corpora (Persian Wikidumps, MirasText) and six other manually crawled text data from a various type of websites (BigBang Page 'scientific', Chetor 'lifestyle', Eligasht 'itinerary', Digikala 'digital magazine', Ted Talks 'general conversational', Books 'novels, storybooks, short stories from old to the contemporary era').\n\n\nAs a part of ParsBERT methodology, an extensive pre-processing combining POS tagging and WordPiece segmentation was carried out to bring the corpora into a proper format.\n\n\nGoals\n-----\n\n\nObjective goals during training are as below (after 300k steps).\n\n\nDerivative models\n-----------------### Base Config#### ParsBERT v2.0 Model\n\n\n* HooshvareLab/bert-fa-base-uncased#### ParsBERT v2.0 Sentiment Analysis\n\n\n* HooshvareLab/bert-fa-base-uncased-sentiment-digikala\n* HooshvareLab/bert-fa-base-uncased-sentiment-snappfood\n* HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-binary\n* HooshvareLab/bert-fa-base-uncased-sentiment-deepsentipers-multi#### ParsBERT v2.0 Text Classification\n\n\n* HooshvareLab/bert-fa-base-uncased-clf-digimag\n* HooshvareLab/bert-fa-base-uncased-clf-persiannews#### ParsBERT v2.0 NER\n\n\n* HooshvareLab/bert-fa-base-uncased-ner-peyma\n* HooshvareLab/bert-fa-base-uncased-ner-arman\n\n\nEval results\n------------\n\n\nParsBERT is evaluated on three NLP downstream tasks: Sentiment Analysis (SA), Text Classification, and Named Entity Recognition (NER). For this matter and due to insufficient resources, two large datasets for SA and two for text classification were manually composed, which are available for public use and benchmarking. ParsBERT outperformed all other language models, including multilingual BERT and other hybrid deep learning models for all tasks, improving the state-of-the-art performance in Persian language modeling.### Sentiment Analysis (SA) Task### Text Classification (TC) Task### Named Entity Recognition (NER) Task### BibTeX entry and citation info\n\n\nPlease cite in publications as the following:\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsBERT Issues repo."
] |
token-classification | transformers |
# BertNER
This model was fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from [ARMAN](https://github.com/HaniehP/PersianNER), [PEYMA](http://nsurl.org/2019-2/tasks/task-7-named-entity-recognition-ner-for-farsi/), and [WikiANN](https://elisa-ie.github.io/wikiann/), which covers ten types of entities:
- Date (DAT)
- Event (EVE)
- Facility (FAC)
- Location (LOC)
- Money (MON)
- Organization (ORG)
- Percent (PCT)
- Person (PER)
- Product (PRO)
- Time (TIM)
## Dataset Information
| | Records | B-DAT | B-EVE | B-FAC | B-LOC | B-MON | B-ORG | B-PCT | B-PER | B-PRO | B-TIM | I-DAT | I-EVE | I-FAC | I-LOC | I-MON | I-ORG | I-PCT | I-PER | I-PRO | I-TIM |
|:------|----------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|
| Train | 29133 | 1423 | 1487 | 1400 | 13919 | 417 | 15926 | 355 | 12347 | 1855 | 150 | 1947 | 5018 | 2421 | 4118 | 1059 | 19579 | 573 | 7699 | 1914 | 332 |
| Valid | 5142 | 267 | 253 | 250 | 2362 | 100 | 2651 | 64 | 2173 | 317 | 19 | 373 | 799 | 387 | 717 | 270 | 3260 | 101 | 1382 | 303 | 35 |
| Test | 6049 | 407 | 256 | 248 | 2886 | 98 | 3216 | 94 | 2646 | 318 | 43 | 568 | 888 | 408 | 858 | 263 | 3967 | 141 | 1707 | 296 | 78 |
## Evaluation
The following tables summarize the scores obtained by the model, both overall and per class.
**Overall**
| Model | accuracy | precision | recall | f1 |
|:----------:|:--------:|:---------:|:--------:|:--------:|
| Bert | 0.995086 | 0.953454 | 0.961113 | 0.957268 |
**Per entities**
| | number | precision | recall | f1 |
|:---: |:------: |:---------: |:--------: |:--------: |
| DAT | 407 | 0.860636 | 0.864865 | 0.862745 |
| EVE | 256 | 0.969582 | 0.996094 | 0.982659 |
| FAC | 248 | 0.976190 | 0.991935 | 0.984000 |
| LOC | 2884 | 0.970232 | 0.971914 | 0.971072 |
| MON | 98 | 0.905263 | 0.877551 | 0.891192 |
| ORG | 3216 | 0.939125 | 0.954602 | 0.946800 |
| PCT | 94 | 1.000000 | 0.968085 | 0.983784 |
| PER | 2645 | 0.965244 | 0.965974 | 0.965608 |
| PRO | 318 | 0.981481 | 1.000000 | 0.990654 |
| TIM | 43 | 0.692308 | 0.837209 | 0.757895 |
## How To Use
You can use this model with the Transformers pipeline for NER.
### Installing requirements
```bash
pip install transformers
```
### How to predict using pipeline
```python
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification # for pytorch
from transformers import TFAutoModelForTokenClassification # for tensorflow
from transformers import pipeline
model_name_or_path = "HooshvareLab/bert-fa-zwnj-base-ner"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch
# model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "در سال ۲۰۱۳ درگذشت و آندرتیکر و کین برای او مراسم یادبود گرفتند."
ner_results = nlp(example)
print(ner_results)
```
## Questions?
Post a Github issue on the [ParsNER Issues](https://github.com/hooshvare/parsner/issues) repo. | {"language": "fa"} | HooshvareLab/bert-fa-zwnj-base-ner | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"fa",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #bert #token-classification #fa #autotrain_compatible #endpoints_compatible #region-us
| BertNER
=======
This model fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from ARMAN, PEYMA, and WikiANN that covered ten types of entities:
* Date (DAT)
* Event (EVE)
* Facility (FAC)
* Location (LOC)
* Money (MON)
* Organization (ORG)
* Percent (PCT)
* Person (PER)
* Product (PRO)
* Time (TIM)
Dataset Information
-------------------
Evaluation
----------
The following tables summarize the scores obtained by model overall and per each class.
Overall
Per entities
How To Use
----------
You use this model with Transformers pipeline for NER.
### Installing requirements
### How to predict using pipeline
Questions?
----------
Post a Github issue on the ParsNER Issues repo.
| [
"### Installing requirements",
"### How to predict using pipeline\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsNER Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #fa #autotrain_compatible #endpoints_compatible #region-us \n",
"### Installing requirements",
"### How to predict using pipeline\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsNER Issues repo."
] | [
35,
5,
34
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #fa #autotrain_compatible #endpoints_compatible #region-us \n### Installing requirements### How to predict using pipeline\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsNER Issues repo."
] |
fill-mask | transformers |
# ParsBERT (v3.0)
A Transformer-based Model for Persian Language Understanding
The new version of BERT v3.0 for Persian is available today and handles the zero-width non-joiner (ZWNJ) character used in Persian writing. The model was also trained on new multi-type corpora with a new vocabulary.
## Introduction
ParsBERT is a monolingual language model based on Google’s BERT architecture. This model is pre-trained on large Persian corpora with various writing styles from numerous subjects (e.g., scientific, novels, news).
Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515)
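As a minimal usage sketch (not taken from the original card), the checkpoint can be queried through the fill-mask pipeline; it assumes the standard BERT `[MASK]` token, which can be confirmed via `tokenizer.mask_token`.
```python
from transformers import pipeline

# Load the pretrained ParsBERT v3.0 checkpoint for masked language modeling.
fill_mask = pipeline("fill-mask", model="HooshvareLab/bert-fa-zwnj-base")

# Top predictions for the masked Persian token.
print(fill_mask("زندگی یک [MASK] بزرگ است."))
```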
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. | {"language": "fa", "license": "apache-2.0"} | HooshvareLab/bert-fa-zwnj-base | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"fa",
"arxiv:2005.12515",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2005.12515"
] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #fa #arxiv-2005.12515 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# ParsBERT (v3.0)
A Transformer-based Model for Persian Language Understanding
The new version of BERT v3.0 for Persian is available today and can tackle the zero-width non-joiner character for Persian writing. Also, the model was trained on new multi-types corpora with a new set of vocabulary.
## Introduction
ParsBERT is a monolingual language model based on Google’s BERT architecture. This model is pre-trained on large Persian corpora with various writing styles from numerous subjects (e.g., scientific, novels, news).
Paper presenting ParsBERT: arXiv:2005.12515
### BibTeX entry and citation info
Please cite in publications as the following:
## Questions?
Post a Github issue on the ParsBERT Issues repo. | [
"# ParsBERT (v3.0)\nA Transformer-based Model for Persian Language Understanding\n\nThe new version of BERT v3.0 for Persian is available today and can tackle the zero-width non-joiner character for Persian writing. Also, the model was trained on new multi-types corpora with a new set of vocabulary.",
"## Introduction\n\nParsBERT is a monolingual language model based on Google’s BERT architecture. This model is pre-trained on large Persian corpora with various writing styles from numerous subjects (e.g., scientific, novels, news).\n \nPaper presenting ParsBERT: arXiv:2005.12515",
"### BibTeX entry and citation info\n\nPlease cite in publications as the following:",
"## Questions?\nPost a Github issue on the ParsBERT Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #fa #arxiv-2005.12515 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# ParsBERT (v3.0)\nA Transformer-based Model for Persian Language Understanding\n\nThe new version of BERT v3.0 for Persian is available today and can tackle the zero-width non-joiner character for Persian writing. Also, the model was trained on new multi-types corpora with a new set of vocabulary.",
"## Introduction\n\nParsBERT is a monolingual language model based on Google’s BERT architecture. This model is pre-trained on large Persian corpora with various writing styles from numerous subjects (e.g., scientific, novels, news).\n \nPaper presenting ParsBERT: arXiv:2005.12515",
"### BibTeX entry and citation info\n\nPlease cite in publications as the following:",
"## Questions?\nPost a Github issue on the ParsBERT Issues repo."
] | [
56,
70,
66,
18,
19
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #fa #arxiv-2005.12515 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n# ParsBERT (v3.0)\nA Transformer-based Model for Persian Language Understanding\n\nThe new version of BERT v3.0 for Persian is available today and can tackle the zero-width non-joiner character for Persian writing. Also, the model was trained on new multi-types corpora with a new set of vocabulary.## Introduction\n\nParsBERT is a monolingual language model based on Google’s BERT architecture. This model is pre-trained on large Persian corpora with various writing styles from numerous subjects (e.g., scientific, novels, news).\n \nPaper presenting ParsBERT: arXiv:2005.12515### BibTeX entry and citation info\n\nPlease cite in publications as the following:## Questions?\nPost a Github issue on the ParsBERT Issues repo."
] |
token-classification | transformers |
# DistilbertNER
This model was fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from [ARMAN](https://github.com/HaniehP/PersianNER), [PEYMA](http://nsurl.org/2019-2/tasks/task-7-named-entity-recognition-ner-for-farsi/), and [WikiANN](https://elisa-ie.github.io/wikiann/), which covers ten types of entities:
- Date (DAT)
- Event (EVE)
- Facility (FAC)
- Location (LOC)
- Money (MON)
- Organization (ORG)
- Percent (PCT)
- Person (PER)
- Product (PRO)
- Time (TIM)
## Dataset Information
| | Records | B-DAT | B-EVE | B-FAC | B-LOC | B-MON | B-ORG | B-PCT | B-PER | B-PRO | B-TIM | I-DAT | I-EVE | I-FAC | I-LOC | I-MON | I-ORG | I-PCT | I-PER | I-PRO | I-TIM |
|:------|----------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|
| Train | 29133 | 1423 | 1487 | 1400 | 13919 | 417 | 15926 | 355 | 12347 | 1855 | 150 | 1947 | 5018 | 2421 | 4118 | 1059 | 19579 | 573 | 7699 | 1914 | 332 |
| Valid | 5142 | 267 | 253 | 250 | 2362 | 100 | 2651 | 64 | 2173 | 317 | 19 | 373 | 799 | 387 | 717 | 270 | 3260 | 101 | 1382 | 303 | 35 |
| Test | 6049 | 407 | 256 | 248 | 2886 | 98 | 3216 | 94 | 2646 | 318 | 43 | 568 | 888 | 408 | 858 | 263 | 3967 | 141 | 1707 | 296 | 78 |
## Evaluation
The following tables summarize the scores obtained by the model, both overall and per class.
**Overall**
| Model | accuracy | precision | recall | f1 |
|:----------:|:--------:|:---------:|:--------:|:--------:|
| Distilbert | 0.994534 | 0.946326 | 0.95504 | 0.950663 |
**Per entities**
| | number | precision | recall | f1 |
|:---: |:------: |:---------: |:--------: |:--------: |
| DAT | 407 | 0.812048 | 0.828010 | 0.819951 |
| EVE | 256 | 0.955056 | 0.996094 | 0.975143 |
| FAC | 248 | 0.972549 | 1.000000 | 0.986083 |
| LOC | 2884 | 0.968403 | 0.967060 | 0.967731 |
| MON | 98 | 0.925532 | 0.887755 | 0.906250 |
| ORG | 3216 | 0.932095 | 0.951803 | 0.941846 |
| PCT | 94 | 0.936842 | 0.946809 | 0.941799 |
| PER | 2645 | 0.959818 | 0.957278 | 0.958546 |
| PRO | 318 | 0.963526 | 0.996855 | 0.979907 |
| TIM | 43 | 0.760870 | 0.813953 | 0.786517 |
## How To Use
You can use this model with the Transformers pipeline for NER.
### Installing requirements
```bash
pip install transformers
```
### How to predict using pipeline
```python
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification # for pytorch
from transformers import TFAutoModelForTokenClassification # for tensorflow
from transformers import pipeline
model_name_or_path = "HooshvareLab/distilbert-fa-zwnj-base-ner"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch
# model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "در سال ۲۰۱۳ درگذشت و آندرتیکر و کین برای او مراسم یادبود گرفتند."
ner_results = nlp(example)
print(ner_results)
```
## Questions?
Post a Github issue on the [ParsNER Issues](https://github.com/hooshvare/parsner/issues) repo. | {"language": "fa"} | HooshvareLab/distilbert-fa-zwnj-base-ner | null | [
"transformers",
"pytorch",
"tf",
"distilbert",
"token-classification",
"fa",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #distilbert #token-classification #fa #autotrain_compatible #endpoints_compatible #region-us
| DistilbertNER
=============
This model fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from ARMAN, PEYMA, and WikiANN that covered ten types of entities:
* Date (DAT)
* Event (EVE)
* Facility (FAC)
* Location (LOC)
* Money (MON)
* Organization (ORG)
* Percent (PCT)
* Person (PER)
* Product (PRO)
* Time (TIM)
Dataset Information
-------------------
Evaluation
----------
The following tables summarize the scores obtained by model overall and per each class.
Overall
Per entities
How To Use
----------
You use this model with Transformers pipeline for NER.
### Installing requirements
### How to predict using pipeline
Questions?
----------
Post a Github issue on the ParsNER Issues repo.
| [
"### Installing requirements",
"### How to predict using pipeline\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsNER Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #distilbert #token-classification #fa #autotrain_compatible #endpoints_compatible #region-us \n",
"### Installing requirements",
"### How to predict using pipeline\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsNER Issues repo."
] | [
35,
5,
34
] | [
"TAGS\n#transformers #pytorch #tf #distilbert #token-classification #fa #autotrain_compatible #endpoints_compatible #region-us \n### Installing requirements### How to predict using pipeline\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsNER Issues repo."
] |
fill-mask | transformers |
# DistilBERT
This model handles the zero-width non-joiner (ZWNJ) character used in Persian writing. It was also trained on new multi-type corpora with a new vocabulary.
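A minimal fill-mask sketch, assuming the standard `[MASK]` token of a BERT-style tokenizer; the example sentence deliberately contains a zero-width non-joiner (in «می‌تواند»).
```python
from transformers import pipeline

# Load the DistilBERT checkpoint for masked language modeling.
fill_mask = pipeline("fill-mask", model="HooshvareLab/distilbert-fa-zwnj-base")

# The sentence keeps the ZWNJ character that this model is designed to handle.
print(fill_mask("او می‌تواند [MASK] را به خوبی انجام دهد."))
```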
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo. | {"language": "fa", "license": "apache-2.0"} | HooshvareLab/distilbert-fa-zwnj-base | null | [
"transformers",
"pytorch",
"tf",
"distilbert",
"fill-mask",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #distilbert #fill-mask #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# DistilBERT
This model can tackle the zero-width non-joiner character for Persian writing. Also, the model was trained on new multi-types corpora with a new set of vocabulary.
## Questions?
Post a Github issue on the ParsBERT Issues repo. | [
"# DistilBERT\n\nThis model can tackle the zero-width non-joiner character for Persian writing. Also, the model was trained on new multi-types corpora with a new set of vocabulary.",
"## Questions?\nPost a Github issue on the ParsBERT Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #distilbert #fill-mask #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# DistilBERT\n\nThis model can tackle the zero-width non-joiner character for Persian writing. Also, the model was trained on new multi-types corpora with a new set of vocabulary.",
"## Questions?\nPost a Github issue on the ParsBERT Issues repo."
] | [
43,
41,
19
] | [
"TAGS\n#transformers #pytorch #tf #distilbert #fill-mask #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# DistilBERT\n\nThis model can tackle the zero-width non-joiner character for Persian writing. Also, the model was trained on new multi-types corpora with a new set of vocabulary.## Questions?\nPost a Github issue on the ParsBERT Issues repo."
] |
text-generation | transformers |
# Persian Comment Generator
The model generates comments conditioned on the aspects you specify; it was fine-tuned on [persiannlp/parsinlu](https://github.com/persiannlp/parsinlu). Currently, the model only supports aspects in the food and movie domains. The full list of supported aspect prompts appears in the following section, and a brief generation sketch follows it.
## Comments Aspects
```text
<s>نمونه دیدگاه هم خوب هم بد به طور کلی <sep>
<s>نمونه دیدگاه خوب به طور کلی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر طعم <sep>
<s>نمونه دیدگاه خیلی منفی از نظر طعم و کیفیت <sep>
<s>نمونه دیدگاه خوب از نظر ارزش غذایی و ارزش خرید <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر طعم و بسته بندی <sep>
<s>نمونه دیدگاه خوب از نظر کیفیت <sep>
<s>نمونه دیدگاه خیلی خوب از نظر طعم و کیفیت <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر کیفیت و ارزش خرید <sep>
<s>نمونه دیدگاه خیلی منفی از نظر کیفیت <sep>
<s>نمونه دیدگاه منفی از نظر کیفیت <sep>
<s>نمونه دیدگاه خوب از نظر طعم <sep>
<s>نمونه دیدگاه خیلی خوب به طور کلی <sep>
<s>نمونه دیدگاه خوب از نظر بسته بندی <sep>
<s>نمونه دیدگاه منفی از نظر کیفیت و طعم <sep>
<s>نمونه دیدگاه خیلی منفی از نظر ارسال و طعم <sep>
<s>نمونه دیدگاه خیلی منفی از نظر کیفیت و طعم <sep>
<s>نمونه دیدگاه منفی به طور کلی <sep>
<s>نمونه دیدگاه خوب از نظر ارزش خرید <sep>
<s>نمونه دیدگاه خوب از نظر کیفیت و بسته بندی و طعم <sep>
<s>نمونه دیدگاه خیلی منفی از نظر ارزش خرید و کیفیت <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر طعم و ارزش خرید <sep>
<s>نمونه دیدگاه خیلی خوب از نظر طعم و ارزش خرید <sep>
<s>نمونه دیدگاه منفی از نظر ارسال <sep>
<s>نمونه دیدگاه منفی از نظر طعم <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر ارزش خرید و طعم <sep>
<s>نمونه دیدگاه خیلی منفی از نظر طعم و ارزش خرید <sep>
<s>نمونه دیدگاه نظری ندارم به طور کلی <sep>
<s>نمونه دیدگاه خیلی منفی از نظر طعم <sep>
<s>نمونه دیدگاه خیلی منفی به طور کلی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر بسته بندی <sep>
<s>نمونه دیدگاه خیلی منفی از نظر ارزش خرید و کیفیت و طعم <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید <sep>
<s>نمونه دیدگاه منفی از نظر کیفیت و ارزش خرید <sep>
<s>نمونه دیدگاه خیلی منفی از نظر کیفیت و بسته بندی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کیفیت <sep>
<s>نمونه دیدگاه منفی از نظر طعم و کیفیت <sep>
<s>نمونه دیدگاه خوب از نظر طعم و کیفیت و ارزش خرید <sep>
<s>نمونه دیدگاه خیلی منفی از نظر ارسال <sep>
<s>نمونه دیدگاه خیلی منفی از نظر ارزش خرید و طعم <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر طعم <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بسته بندی و طعم <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و کیفیت و بسته بندی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر بسته بندی و طعم و ارزش خرید <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر کیفیت و طعم <sep>
<s>نمونه دیدگاه خیلی خوب از نظر طعم و بسته بندی <sep>
<s>نمونه دیدگاه خیلی منفی از نظر طعم و کیفیت و بسته بندی <sep>
<s>نمونه دیدگاه خوب از نظر ارزش خرید و بسته بندی و کیفیت <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر طعم و کیفیت <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بسته بندی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و کیفیت <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کیفیت و ارزش خرید و طعم <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و طعم <sep>
<s>نمونه دیدگاه خیلی منفی از نظر کیفیت و بسته بندی و ارسال <sep>
<s>نمونه دیدگاه خوب از نظر کیفیت و ارزش خرید <sep>
<s>نمونه دیدگاه خیلی منفی از نظر کیفیت و ارزش غذایی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کیفیت و ارزش خرید <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر کیفیت <sep>
<s>نمونه دیدگاه منفی از نظر بسته بندی <sep>
<s>نمونه دیدگاه خوب از نظر طعم و کیفیت <sep>
<s>نمونه دیدگاه خوب از نظر کیفیت و ارزش غذایی <sep>
<s>نمونه دیدگاه خیلی منفی از نظر کیفیت و ارزش خرید <sep>
<s>نمونه دیدگاه خوب از نظر طعم و کیفیت و بسته بندی <sep>
<s>نمونه دیدگاه خیلی منفی از نظر ارزش خرید <sep>
<s>نمونه دیدگاه منفی از نظر ارسال و کیفیت <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر ارزش خرید <sep>
<s>نمونه دیدگاه خیلی منفی از نظر بسته بندی <sep>
<s>نمونه دیدگاه خیلی منفی از نظر کیفیت و بسته بندی و ارزش خرید <sep>
<s>نمونه دیدگاه خوب از نظر طعم و ارزش غذایی <sep>
<s>نمونه دیدگاه منفی از نظر ارزش خرید <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کیفیت و طعم <sep>
<s>نمونه دیدگاه خوب از نظر کیفیت و بسته بندی <sep>
<s>نمونه دیدگاه خیلی منفی از نظر بسته بندی و طعم <sep>
<s>نمونه دیدگاه خیلی منفی از نظر طعم و ارزش غذایی <sep>
<s>نمونه دیدگاه خوب از نظر کیفیت و طعم <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر طعم و ارسال <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارزش غذایی <sep>
<s>نمونه دیدگاه خوب از نظر ارزش خرید و کیفیت <sep>
<s>نمونه دیدگاه خوب از نظر ارزش غذایی <sep>
<s>نمونه دیدگاه خوب از نظر طعم و ارزش خرید <sep>
<s>نمونه دیدگاه منفی از نظر طعم و ارزش خرید <sep>
<s>نمونه دیدگاه منفی از نظر ارزش خرید و کیفیت <sep>
<s>نمونه دیدگاه خوب از نظر کیفیت و ارزش خرید و طعم <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بسته بندی و ارسال و طعم و ارزش خرید <sep>
<s>نمونه دیدگاه خوب از نظر کیفیت و طعم و ارزش خرید <sep>
<s>نمونه دیدگاه خوب از نظر کیفیت و بسته بندی و ارزش خرید <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بسته بندی و کیفیت و ارزش خرید <sep>
<s>نمونه دیدگاه منفی از نظر ارزش خرید و طعم <sep>
<s>نمونه دیدگاه خیلی منفی از نظر طعم و بسته بندی <sep>
<s>نمونه دیدگاه خیلی منفی از نظر طعم و کیفیت و ارزش خرید <sep>
<s>نمونه دیدگاه منفی از نظر بسته بندی و کیفیت و طعم <sep>
<s>نمونه دیدگاه خوب از نظر ارسال <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کیفیت و بسته بندی و ارزش غذایی و ارزش خرید <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارزش غذایی و کیفیت <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کیفیت و طعم و ارزش خرید <sep>
<s>نمونه دیدگاه خوب از نظر طعم و ارسال <sep>
<s>نمونه دیدگاه خیلی خوب از نظر طعم و کیفیت و ارزش خرید <sep>
<s>نمونه دیدگاه خوب از نظر بسته بندی و ارزش خرید <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر ارزش غذایی و طعم <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر کیفیت و ارزش خرید و طعم <sep>
<s>نمونه دیدگاه خیلی منفی از نظر ارزش غذایی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر ارزش خرید و کیفیت <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر ارزش غذایی و ارزش خرید <sep>
<s>نمونه دیدگاه منفی از نظر طعم و ارزش غذایی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کیفیت و ارسال <sep>
<s>نمونه دیدگاه خوب از نظر ارزش خرید و طعم <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر ارزش غذایی و بسته بندی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر طعم و ارزش غذایی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر طعم و کیفیت و ارسال <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کیفیت و بسته بندی و طعم و ارزش خرید <sep>
<s>نمونه دیدگاه خیلی خوب از نظر طعم و ارزش غذایی <sep>
<s>نمونه دیدگاه خوب از نظر بسته بندی و طعم و کیفیت <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و ارزش غذایی <sep>
<s>نمونه دیدگاه خوب از نظر ارسال و طعم <sep>
<s>نمونه دیدگاه خوب از نظر ارزش خرید و ارسال <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر ارزش غذایی و کیفیت <sep>
<s>نمونه دیدگاه خوب از نظر ارزش خرید و بسته بندی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کیفیت و طعم و بسته بندی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و طعم و کیفیت <sep>
<s>نمونه دیدگاه خیلی منفی از نظر بسته بندی و کیفیت <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و کیفیت و طعم <sep>
<s>نمونه دیدگاه خیلی منفی از نظر طعم و ارزش خرید و کیفیت <sep>
<s>نمونه دیدگاه منفی از نظر بسته بندی و کیفیت و ارزش خرید <sep>
<s>نمونه دیدگاه خیلی منفی از نظر طعم و کیفیت و ارزش خرید و بسته بندی <sep>
<s>نمونه دیدگاه خوب از نظر ارزش غذایی و ارسال <sep>
<s>نمونه دیدگاه خوب از نظر کیفیت و طعم و ارزش خرید و ارسال <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر ارسال و طعم <sep>
<s>نمونه دیدگاه خیلی منفی از نظر ارزش خرید و بسته بندی و طعم <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارسال و بسته بندی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر طعم و ارزش خرید و ارسال <sep>
<s>نمونه دیدگاه خیلی منفی از نظر کیفیت و ارزش خرید و طعم <sep>
<s>نمونه دیدگاه خوب از نظر بسته بندی و کیفیت <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر بسته بندی و کیفیت <sep>
<s>نمونه دیدگاه خوب از نظر ارزش خرید و بسته بندی و ارسال <sep>
<s>نمونه دیدگاه خیلی منفی از نظر بسته بندی و طعم و ارزش خرید <sep>
<s>نمونه دیدگاه نظری ندارم از نظر بسته بندی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر کیفیت و بسته بندی و طعم <sep>
<s>نمونه دیدگاه خوب از نظر طعم و بسته بندی <sep>
<s>نمونه دیدگاه خیلی منفی از نظر طعم و ارزش خرید و بسته بندی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و بسته بندی <sep>
<s>نمونه دیدگاه خوب از نظر ارزش خرید و ارزش غذایی <sep>
<s>نمونه دیدگاه منفی از نظر طعم و بسته بندی <sep>
<s>نمونه دیدگاه منفی از نظر کیفیت و بسته بندی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کیفیت و ارزش غذایی و بسته بندی <sep>
<s>نمونه دیدگاه خوب از نظر ارسال و بسته بندی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر ارسال <sep>
<s>نمونه دیدگاه نظری ندارم از نظر طعم <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کیفیت و بسته بندی <sep>
<s>نمونه دیدگاه منفی از نظر ارزش غذایی <sep>
<s>نمونه دیدگاه خوب از نظر بسته بندی و طعم <sep>
<s>نمونه دیدگاه خیلی منفی از نظر ارسال و کیفیت <sep>
<s>نمونه دیدگاه خیلی خوب از نظر طعم و کیفیت و بسته بندی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر طعم و کیفیت و بسته بندی و ارزش غذایی <sep>
<s>نمونه دیدگاه خوب از نظر طعم و بسته بندی و ارزش خرید <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر کیفیت و ارسال <sep>
<s>نمونه دیدگاه خیلی خوب از نظر طعم و کیفیت و ارزش غذایی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کیفیت و طعم و ارزش غذایی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کیفیت و ارسال و ارزش خرید <sep>
<s>نمونه دیدگاه نظری ندارم از نظر ارزش غذایی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارسال و ارزش خرید و کیفیت <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بسته بندی و طعم و ارزش خرید <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کیفیت و ارسال و بسته بندی <sep>
<s>نمونه دیدگاه منفی از نظر بسته بندی و طعم و کیفیت <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بسته بندی و ارسال <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارسال و کیفیت <sep>
<s>نمونه دیدگاه خوب از نظر کیفیت و ارسال <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر ارزش خرید و ارزش غذایی <sep>
<s>نمونه دیدگاه خوب از نظر ارزش غذایی و طعم <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و ارزش غذایی و طعم <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارسال و بسته بندی و کیفیت <sep>
<s>نمونه دیدگاه منفی از نظر بسته بندی و طعم <sep>
<s>نمونه دیدگاه منفی از نظر بسته بندی و ارزش غذایی <sep>
<s>نمونه دیدگاه منفی از نظر طعم و کیفیت و ارزش خرید <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر بسته بندی و طعم <sep>
<s>نمونه دیدگاه خیلی خوب از نظر طعم و ارزش غذایی و ارزش خرید <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارزش غذایی و ارزش خرید <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارزش خرید و طعم و بسته بندی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر کیفیت و بسته بندی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر ارزش خرید و کیفیت و طعم <sep>
<s>نمونه دیدگاه منفی از نظر ارزش خرید و کیفیت و طعم <sep>
<s>نمونه دیدگاه منفی از نظر کیفیت و طعم و ارزش غذایی <sep>
<s>نمونه دیدگاه خیلی منفی از نظر ارسال و کیفیت و طعم <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارزش غذایی و طعم <sep>
<s>نمونه دیدگاه خیلی خوب از نظر طعم و بسته بندی و ارسال <sep>
<s>نمونه دیدگاه خیلی منفی از نظر کیفیت و بسته بندی و طعم <sep>
<s>نمونه دیدگاه خیلی خوب از نظر ارزش غذایی و طعم و کیفیت <sep>
<s>نمونه دیدگاه خیلی منفی از نظر ارزش غذایی و کیفیت <sep>
<s>نمونه دیدگاه منفی از نظر ارزش خرید و طعم و کیفیت <sep>
<s>نمونه دیدگاه خیلی منفی از نظر کیفیت و طعم و بسته بندی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر ارسال و ارزش خرید <sep>
<s>نمونه دیدگاه خیلی منفی از نظر ارزش خرید و طعم و کیفیت <sep>
<s>نمونه دیدگاه خیلی منفی از نظر طعم و ارسال <sep>
<s>نمونه دیدگاه منفی از نظر موسیقی و بازی <sep>
<s>نمونه دیدگاه منفی از نظر داستان <sep>
<s>نمونه دیدگاه خیلی خوب از نظر صدا <sep>
<s>نمونه دیدگاه خیلی منفی از نظر داستان <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر داستان و فیلمبرداری و کارگردانی و بازی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر بازی <sep>
<s>نمونه دیدگاه منفی از نظر داستان و بازی <sep>
<s>نمونه دیدگاه منفی از نظر بازی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر داستان و کارگردانی و بازی <sep>
<s>نمونه دیدگاه خیلی منفی از نظر داستان و بازی <sep>
<s>نمونه دیدگاه خوب از نظر بازی <sep>
<s>نمونه دیدگاه خیلی منفی از نظر بازی و داستان و کارگردانی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بازی <sep>
<s>نمونه دیدگاه خوب از نظر بازی و داستان <sep>
<s>نمونه دیدگاه خوب از نظر داستان و بازی <sep>
<s>نمونه دیدگاه خوب از نظر داستان <sep>
<s>نمونه دیدگاه خیلی خوب از نظر داستان <sep>
<s>نمونه دیدگاه خیلی خوب از نظر داستان و بازی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بازی و داستان <sep>
<s>نمونه دیدگاه خیلی منفی از نظر داستان و کارگردانی و فیلمبرداری <sep>
<s>نمونه دیدگاه خیلی منفی از نظر بازی <sep>
<s>نمونه دیدگاه خیلی منفی از نظر کارگردانی <sep>
<s>نمونه دیدگاه منفی از نظر کارگردانی و داستان <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کارگردانی و بازی <sep>
<s>نمونه دیدگاه خوب از نظر کارگردانی و بازی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر صحنه و کارگردانی <sep>
<s>نمونه دیدگاه منفی از نظر بازی و کارگردانی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بازی و داستان و کارگردانی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کارگردانی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر فیلمبرداری <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بازی و کارگردانی و فیلمبرداری و داستان <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کارگردانی و بازی و موسیقی <sep>
<s>نمونه دیدگاه خوب از نظر صحنه و بازی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بازی و موسیقی و کارگردانی <sep>
<s>نمونه دیدگاه خوب از نظر داستان و کارگردانی <sep>
<s>نمونه دیدگاه خوب از نظر بازی و کارگردانی <sep>
<s>نمونه دیدگاه خیلی منفی از نظر بازی و کارگردانی <sep>
<s>نمونه دیدگاه منفی از نظر کارگردانی و موسیقی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر بازی و داستان <sep>
<s>نمونه دیدگاه خوب از نظر کارگردانی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر بازی و کارگردانی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کارگردانی و داستان <sep>
<s>نمونه دیدگاه خیلی منفی از نظر داستان و کارگردانی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر داستان و کارگردانی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر داستان <sep>
<s>نمونه دیدگاه خوب از نظر بازی و داستان و موسیقی و کارگردانی و فیلمبرداری <sep>
<s>نمونه دیدگاه خیلی منفی از نظر داستان و بازی و کارگردانی <sep>
<s>نمونه دیدگاه خیلی منفی از نظر بازی و داستان <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر داستان و بازی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر داستان و بازی و کارگردانی <sep>
<s>نمونه دیدگاه منفی از نظر بازی و داستان <sep>
<s>نمونه دیدگاه خوب از نظر فیلمبرداری و صحنه و موسیقی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر داستان و کارگردانی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر داستان و کارگردانی و بازی <sep>
<s>نمونه دیدگاه نظری ندارم از نظر بازی <sep>
<s>نمونه دیدگاه منفی از نظر داستان و کارگردانی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر داستان و بازی و صحنه <sep>
<s>نمونه دیدگاه خوب از نظر کارگردانی و داستان و بازی و فیلمبرداری <sep>
<s>نمونه دیدگاه خوب از نظر بازی و صحنه و داستان <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بازی و صحنه و داستان <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بازی و موسیقی و فیلمبرداری <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کارگردانی و صحنه <sep>
<s>نمونه دیدگاه خیلی خوب از نظر فیلمبرداری و صحنه و داستان و کارگردانی <sep>
<s>نمونه دیدگاه منفی از نظر کارگردانی و بازی <sep>
<s>نمونه دیدگاه منفی از نظر کارگردانی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر داستان و فیلمبرداری <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کارگردانی و بازی و داستان <sep>
<s>نمونه دیدگاه خیلی خوب از نظر فیلمبرداری و بازی و داستان <sep>
<s>نمونه دیدگاه خیلی خوب از نظر کارگردانی و بازی و داستان و صحنه <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر موسیقی و کارگردانی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر کارگردانی و داستان <sep>
<s>نمونه دیدگاه خیلی خوب از نظر موسیقی و صحنه <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر صحنه و فیلمبرداری و داستان و بازی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بازی و داستان و موسیقی و فیلمبرداری <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بازی و فیلمبرداری <sep>
<s>نمونه دیدگاه خیلی منفی از نظر کارگردانی و صدا و صحنه و داستان <sep>
<s>نمونه دیدگاه خوب از نظر داستان و کارگردانی و بازی <sep>
<s>نمونه دیدگاه منفی از نظر داستان و بازی و کارگردانی <sep>
<s>نمونه دیدگاه خوب از نظر داستان و بازی و موسیقی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بازی و کارگردانی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر کارگردانی <sep>
<s>نمونه دیدگاه خیلی منفی از نظر کارگردانی و بازی و صحنه <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر کارگردانی و بازی <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر صحنه و فیلمبرداری و داستان <sep>
<s>نمونه دیدگاه خوب از نظر موسیقی و داستان <sep>
<s>نمونه دیدگاه منفی از نظر موسیقی و بازی و داستان <sep>
<s>نمونه دیدگاه خیلی خوب از نظر صدا و بازی <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بازی و صحنه و فیلمبرداری <sep>
<s>نمونه دیدگاه خیلی منفی از نظر بازی و فیلمبرداری و داستان و کارگردانی <sep>
<s>نمونه دیدگاه خیلی منفی از نظر صحنه <sep>
<s>نمونه دیدگاه منفی از نظر داستان و صحنه <sep>
<s>نمونه دیدگاه منفی از نظر بازی و صحنه و صدا <sep>
<s>نمونه دیدگاه خیلی منفی از نظر فیلمبرداری و صدا <sep>
<s>نمونه دیدگاه خیلی خوب از نظر موسیقی <sep>
<s>نمونه دیدگاه خوب از نظر بازی و کارگردانی و داستان <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بازی و فیلمبرداری و موسیقی و کارگردانی و داستان <sep>
<s>نمونه دیدگاه هم خوب هم بد از نظر فیلمبرداری و داستان و بازی <sep>
<s>نمونه دیدگاه منفی از نظر صحنه و فیلمبرداری و داستان <sep>
<s>نمونه دیدگاه خیلی خوب از نظر بازی و کارگردانی و داستان <sep>
```
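A rough generation sketch, not part of the original card: the prompt format follows the aspect list above, and the sampling settings are illustrative only.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="HooshvareLab/gpt2-fa-comment")

# One of the aspect prompts listed above ("a good comment regarding taste").
prompt = "<s>نمونه دیدگاه خوب از نظر طعم <sep>"

# Sampling parameters are illustrative; tune them for your use case.
outputs = generator(prompt, max_length=64, do_sample=True, top_p=0.95, num_return_sequences=1)
print(outputs[0]["generated_text"])
```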
## Questions?
Post a Github issue on the [ParsGPT2 Issues](https://github.com/hooshvare/parsgpt/issues) repo. | {"language": "fa", "license": "apache-2.0", "widget": [{"text": "<s>\u0646\u0645\u0648\u0646\u0647 \u062f\u06cc\u062f\u06af\u0627\u0647 \u0647\u0645 \u062e\u0648\u0628 \u0647\u0645 \u0628\u062f \u0628\u0647 \u0637\u0648\u0631 \u06a9\u0644\u06cc <sep>"}, {"text": "<s>\u0646\u0645\u0648\u0646\u0647 \u062f\u06cc\u062f\u06af\u0627\u0647 \u062e\u06cc\u0644\u06cc \u0645\u0646\u0641\u06cc \u0627\u0632 \u0646\u0638\u0631 \u06a9\u06cc\u0641\u06cc\u062a \u0648 \u0637\u0639\u0645 <sep>"}, {"text": "<s>\u0646\u0645\u0648\u0646\u0647 \u062f\u06cc\u062f\u06af\u0627\u0647 \u062e\u0648\u0628 \u0627\u0632 \u0646\u0638\u0631 \u0628\u0627\u0632\u06cc \u0648 \u06a9\u0627\u0631\u06af\u0631\u062f\u0627\u0646\u06cc <sep>"}, {"text": "<s>\u0646\u0645\u0648\u0646\u0647 \u062f\u06cc\u062f\u06af\u0627\u0647 \u062e\u06cc\u0644\u06cc \u062e\u0648\u0628 \u0627\u0632 \u0646\u0638\u0631 \u0628\u0627\u0632\u06cc \u0648 \u0635\u062d\u0646\u0647 \u0648 \u062f\u0627\u0633\u062a\u0627\u0646 <sep>"}, {"text": "<s>\u0646\u0645\u0648\u0646\u0647 \u062f\u06cc\u062f\u06af\u0627\u0647 \u062e\u06cc\u0644\u06cc \u0645\u0646\u0641\u06cc \u0627\u0632 \u0646\u0638\u0631 \u0627\u0631\u0632\u0634 \u062e\u0631\u06cc\u062f \u0648 \u0637\u0639\u0645 \u0648 \u06a9\u06cc\u0641\u06cc\u062a <sep>"}]} | HooshvareLab/gpt2-fa-comment | null | [
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #gpt2 #text-generation #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Persian Comment Generator
The model can generate comments based on your aspects, and the model was fine-tuned on persiannlp/parsinlu. Currently, the model only supports aspects in the food and movie scope. You can see the whole aspects in the following section.
## Comments Aspects
## Questions?
Post a Github issue on the ParsGPT2 Issues repo. | [
"# Persian Comment Generator \n\nThe model can generate comments based on your aspects, and the model was fine-tuned on persiannlp/parsinlu. Currently, the model only supports aspects in the food and movie scope. You can see the whole aspects in the following section.",
"## Comments Aspects",
"## Questions?\nPost a Github issue on the ParsGPT2 Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #jax #gpt2 #text-generation #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Persian Comment Generator \n\nThe model can generate comments based on your aspects, and the model was fine-tuned on persiannlp/parsinlu. Currently, the model only supports aspects in the food and movie scope. You can see the whole aspects in the following section.",
"## Comments Aspects",
"## Questions?\nPost a Github issue on the ParsGPT2 Issues repo."
] | [
51,
55,
4,
20
] | [
"TAGS\n#transformers #pytorch #tf #jax #gpt2 #text-generation #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Persian Comment Generator \n\nThe model can generate comments based on your aspects, and the model was fine-tuned on persiannlp/parsinlu. Currently, the model only supports aspects in the food and movie scope. You can see the whole aspects in the following section.## Comments Aspects## Questions?\nPost a Github issue on the ParsGPT2 Issues repo."
] |
text-generation | transformers |
# Persian Poet GPT2
## Poets
The model can generate poetry in the style of your favorite poet. To try it, add one of the following lines as the input in the box on the right side, or follow the [fine-tuning notebook](https://colab.research.google.com/github/hooshvare/parsgpt/blob/master/notebooks/Persian_Poetry_FineTuning.ipynb); a brief generation sketch follows the list.
```text
<s>رودکی<|startoftext|>
<s>فردوسی<|startoftext|>
<s>کسایی<|startoftext|>
<s>ناصرخسرو<|startoftext|>
<s>منوچهری<|startoftext|>
<s>فرخی سیستانی<|startoftext|>
<s>مسعود سعد سلمان<|startoftext|>
<s>ابوسعید ابوالخیر<|startoftext|>
<s>باباطاهر<|startoftext|>
<s>فخرالدین اسعد گرگانی<|startoftext|>
<s>اسدی توسی<|startoftext|>
<s>هجویری<|startoftext|>
<s>خیام<|startoftext|>
<s>نظامی<|startoftext|>
<s>عطار<|startoftext|>
<s>سنایی<|startoftext|>
<s>خاقانی<|startoftext|>
<s>انوری<|startoftext|>
<s>عبدالواسع جبلی<|startoftext|>
<s>نصرالله منشی<|startoftext|>
<s>مهستی گنجوی<|startoftext|>
<s>باباافضل کاشانی<|startoftext|>
<s>مولوی<|startoftext|>
<s>سعدی<|startoftext|>
<s>خواجوی کرمانی<|startoftext|>
<s>عراقی<|startoftext|>
<s>سیف فرغانی<|startoftext|>
<s>حافظ<|startoftext|>
<s>اوحدی<|startoftext|>
<s>شیخ محمود شبستری<|startoftext|>
<s>عبید زاکانی<|startoftext|>
<s>امیرخسرو دهلوی<|startoftext|>
<s>سلمان ساوجی<|startoftext|>
<s>شاه نعمتالله ولی<|startoftext|>
<s>جامی<|startoftext|>
<s>هلالی جغتایی<|startoftext|>
<s>وحشی<|startoftext|>
<s>محتشم کاشانی<|startoftext|>
<s>شیخ بهایی<|startoftext|>
<s>عرفی<|startoftext|>
<s>رضیالدین آرتیمانی<|startoftext|>
<s>صائب تبریزی<|startoftext|>
<s>فیض کاشانی<|startoftext|>
<s>بیدل دهلوی<|startoftext|>
<s>هاتف اصفهانی<|startoftext|>
<s>فروغی بسطامی<|startoftext|>
<s>قاآنی<|startoftext|>
<s>ملا هادی سبزواری<|startoftext|>
<s>پروین اعتصامی<|startoftext|>
<s>ملکالشعرای بهار<|startoftext|>
<s>شهریار<|startoftext|>
<s>رهی معیری<|startoftext|>
<s>اقبال لاهوری<|startoftext|>
<s>خلیلالله خلیلی<|startoftext|>
<s>شاطرعباس صبوحی<|startoftext|>
<s>نیما یوشیج ( آوای آزاد )<|startoftext|>
<s>احمد شاملو<|startoftext|>
<s>سهراب سپهری<|startoftext|>
<s>فروغ فرخزاد<|startoftext|>
<s>سیمین بهبهانی<|startoftext|>
<s>مهدی اخوان ثالث<|startoftext|>
<s>محمدحسن بارق شفیعی<|startoftext|>
<s>شیون فومنی<|startoftext|>
<s>کامبیز صدیقی کسمایی<|startoftext|>
<s>بهرام سالکی<|startoftext|>
<s>عبدالقهّار عاصی<|startoftext|>
<s>اِ لیـــار (جبار محمدی )<|startoftext|>
```
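A rough generation sketch, not part of the original card: the prompt format follows the poet list above, and the generation settings are illustrative only.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "HooshvareLab/gpt2-fa-poetry"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt taken from the poet list above (Hafez).
prompt = "<s>حافظ<|startoftext|>"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sample a continuation; settings are illustrative.
output_ids = model.generate(input_ids, max_length=128, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(output_ids[0]))
```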
## Questions?
Post a Github issue on the [ParsGPT2 Issues](https://github.com/hooshvare/parsgpt/issues) repo. | {"language": "fa", "license": "apache-2.0", "widget": [{"text": "<s>\u0631\u0648\u062f\u06a9\u06cc<|startoftext|>"}, {"text": "<s>\u0641\u0631\u062f\u0648\u0633\u06cc<|startoftext|>"}, {"text": "<s>\u062e\u06cc\u0627\u0645<|startoftext|>"}, {"text": "<s>\u0639\u0637\u0627\u0631<|startoftext|>"}, {"text": "<s>\u0646\u0638\u0627\u0645\u06cc<|startoftext|>"}]} | HooshvareLab/gpt2-fa-poetry | null | [
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #gpt2 #text-generation #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Persian Poet GPT2
## Poets
The model can generate poetry based on your favorite poet, and you need to add one of the following lines as the input the box on the right side or follow the fine-tuning notebook.
## Questions?
Post a Github issue on the ParsGPT2 Issues repo. | [
"# Persian Poet GPT2",
"## Poets\nThe model can generate poetry based on your favorite poet, and you need to add one of the following lines as the input the box on the right side or follow the fine-tuning notebook.",
"## Questions?\nPost a Github issue on the ParsGPT2 Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #jax #gpt2 #text-generation #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Persian Poet GPT2",
"## Poets\nThe model can generate poetry based on your favorite poet, and you need to add one of the following lines as the input the box on the right side or follow the fine-tuning notebook.",
"## Questions?\nPost a Github issue on the ParsGPT2 Issues repo."
] | [
51,
6,
41,
20
] | [
"TAGS\n#transformers #pytorch #tf #jax #gpt2 #text-generation #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Persian Poet GPT2## Poets\nThe model can generate poetry based on your favorite poet, and you need to add one of the following lines as the input the box on the right side or follow the fine-tuning notebook.## Questions?\nPost a Github issue on the ParsGPT2 Issues repo."
] |
text-generation | transformers |
# ParsGPT2
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ParsGPT2,
author = {Hooshvare Team},
title = {ParsGPT2 the Persian version of GPT2},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/hooshvare/parsgpt}},
}
```
## Questions?
Post a Github issue on the [ParsGPT2 Issues](https://github.com/hooshvare/parsgpt/issues) repo. | {"language": "fa", "license": "apache-2.0", "widget": [{"text": "\u062f\u0631 \u06cc\u06a9 \u0627\u062a\u0641\u0627\u0642 \u0634\u06af\u0641\u062a \u0627\u0646\u06af\u06cc\u0632\u060c \u067e\u0698\u0648\u0647\u0634\u06af\u0631\u0627\u0646"}, {"text": "\u06af\u0631\u0641\u062a\u06af\u06cc \u0628\u06cc\u0646\u06cc \u062f\u0631 \u06a9\u0648\u062f\u06a9\u0627\u0646 \u0648 \u0628\u0647\u200c\u062e\u0635\u0648\u0635 \u0646\u0648\u0632\u0627\u062f\u0627\u0646 \u0628\u0627\u0639\u062b \u0645\u06cc\u200c\u0634\u0648\u062f"}, {"text": "\u0627\u0645\u06cc\u062f\u0648\u0627\u0631\u06cc\u0645 \u0646\u0648\u0631\u0648\u0632 \u0627\u0645\u0633\u0627\u0644 \u0633\u0627\u0644\u06cc"}]} | HooshvareLab/gpt2-fa | null | [
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #gpt2 #text-generation #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# ParsGPT2
### BibTeX entry and citation info
Please cite in publications as the following:
## Questions?
Post a Github issue on the ParsGPT2 Issues repo. | [
"# ParsGPT2",
"### BibTeX entry and citation info\n\nPlease cite in publications as the following:",
"## Questions?\nPost a Github issue on the ParsGPT2 Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #jax #gpt2 #text-generation #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# ParsGPT2",
"### BibTeX entry and citation info\n\nPlease cite in publications as the following:",
"## Questions?\nPost a Github issue on the ParsGPT2 Issues repo."
] | [
55,
5,
18,
20
] | [
"TAGS\n#transformers #pytorch #tf #jax #gpt2 #text-generation #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n# ParsGPT2### BibTeX entry and citation info\n\nPlease cite in publications as the following:## Questions?\nPost a Github issue on the ParsGPT2 Issues repo."
] |
token-classification | transformers |
# RobertaNER
This model was fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from [ARMAN](https://github.com/HaniehP/PersianNER), [PEYMA](http://nsurl.org/2019-2/tasks/task-7-named-entity-recognition-ner-for-farsi/), and [WikiANN](https://elisa-ie.github.io/wikiann/), which covers ten types of entities:
- Date (DAT)
- Event (EVE)
- Facility (FAC)
- Location (LOC)
- Money (MON)
- Organization (ORG)
- Percent (PCT)
- Person (PER)
- Product (PRO)
- Time (TIM)
## Dataset Information
| | Records | B-DAT | B-EVE | B-FAC | B-LOC | B-MON | B-ORG | B-PCT | B-PER | B-PRO | B-TIM | I-DAT | I-EVE | I-FAC | I-LOC | I-MON | I-ORG | I-PCT | I-PER | I-PRO | I-TIM |
|:------|----------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|
| Train | 29133 | 1423 | 1487 | 1400 | 13919 | 417 | 15926 | 355 | 12347 | 1855 | 150 | 1947 | 5018 | 2421 | 4118 | 1059 | 19579 | 573 | 7699 | 1914 | 332 |
| Valid | 5142 | 267 | 253 | 250 | 2362 | 100 | 2651 | 64 | 2173 | 317 | 19 | 373 | 799 | 387 | 717 | 270 | 3260 | 101 | 1382 | 303 | 35 |
| Test | 6049 | 407 | 256 | 248 | 2886 | 98 | 3216 | 94 | 2646 | 318 | 43 | 568 | 888 | 408 | 858 | 263 | 3967 | 141 | 1707 | 296 | 78 |
## Evaluation
The following tables summarize the scores obtained by the model, both overall and per class.
**Overall**
| Model | accuracy | precision | recall | f1 |
|:----------:|:--------:|:---------:|:--------:|:--------:|
| Roberta | 0.994849 | 0.949816 | 0.960235 | 0.954997 |
**Per entities**
| | number | precision | recall | f1 |
|:---: |:------: |:---------: |:--------: |:--------: |
| DAT | 407 | 0.844869 | 0.869779 | 0.857143 |
| EVE | 256 | 0.948148 | 1.000000 | 0.973384 |
| FAC | 248 | 0.957529 | 1.000000 | 0.978304 |
| LOC | 2884 | 0.965422 | 0.968100 | 0.966759 |
| MON | 98 | 0.937500 | 0.918367 | 0.927835 |
| ORG | 3216 | 0.943662 | 0.958333 | 0.950941 |
| PCT | 94 | 1.000000 | 0.968085 | 0.983784 |
| PER | 2646 | 0.957030 | 0.959562 | 0.958294 |
| PRO | 318 | 0.963636 | 1.000000 | 0.981481 |
| TIM | 43 | 0.739130 | 0.790698 | 0.764045 |
## How To Use
You can use this model with the Transformers pipeline for NER.
### Installing requirements
```bash
pip install transformers
```
### How to predict using pipeline
```python
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification # for pytorch
from transformers import TFAutoModelForTokenClassification # for tensorflow
from transformers import pipeline
model_name_or_path = "HooshvareLab/roberta-fa-zwnj-base-ner"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch
# model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "در سال ۲۰۱۳ درگذشت و آندرتیکر و کین برای او مراسم یادبود گرفتند."
ner_results = nlp(example)
print(ner_results)
```
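Each item in `ner_results` is a per-token prediction (a dict with keys such as `entity`, `score`, `word`, `start`, and `end`). As an optional follow-up, and assuming a recent Transformers release, the pipeline can also merge sub-word predictions into whole entity spans; a small sketch:

```python
# Optional: merge sub-word predictions into whole entity spans.
# `aggregation_strategy` is available in recent Transformers releases;
# older versions used `grouped_entities=True` instead.
nlp_grouped = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(nlp_grouped(example))
```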
## Questions?
Post a Github issue on the [ParsNER Issues](https://github.com/hooshvare/parsner/issues) repo. | {"language": "fa"} | HooshvareLab/roberta-fa-zwnj-base-ner | null | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"token-classification",
"fa",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #roberta #token-classification #fa #autotrain_compatible #endpoints_compatible #region-us
| RobertaNER
==========
This model was fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from ARMAN, PEYMA, and WikiANN, covering ten entity types:
* Date (DAT)
* Event (EVE)
* Facility (FAC)
* Location (LOC)
* Money (MON)
* Organization (ORG)
* Percent (PCT)
* Person (PER)
* Product (PRO)
* Time (TIM)
Dataset Information
-------------------
Evaluation
----------
The following tables summarize the scores obtained by the model, both overall and per class.
Overall
Per entities
How To Use
----------
You can use this model with the Transformers pipeline for NER.
### Installing requirements
### How to predict using pipeline
Questions?
----------
Post a Github issue on the ParsNER Issues repo.
| [
"### Installing requirements",
"### How to predict using pipeline\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsNER Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #jax #roberta #token-classification #fa #autotrain_compatible #endpoints_compatible #region-us \n",
"### Installing requirements",
"### How to predict using pipeline\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsNER Issues repo."
] | [
35,
5,
34
] | [
"TAGS\n#transformers #pytorch #tf #jax #roberta #token-classification #fa #autotrain_compatible #endpoints_compatible #region-us \n### Installing requirements### How to predict using pipeline\n\n\nQuestions?\n----------\n\n\nPost a Github issue on the ParsNER Issues repo."
] |
fill-mask | transformers |
# Roberta
This model can handle the zero-width non-joiner (ZWNJ) character used in Persian writing. It was also trained on new multi-type corpora with a new vocabulary set.
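A minimal usage sketch with the fill-mask pipeline is shown below; the checkpoint id comes from this repository, while the Persian example sentence is purely illustrative:

```python
from transformers import pipeline

# Sketch: load the checkpoint and fill a masked Persian sentence.
# The mask token is read from the tokenizer rather than hard-coded.
fill_mask = pipeline("fill-mask", model="HooshvareLab/roberta-fa-zwnj-base")
masked = f"تهران پایتخت {fill_mask.tokenizer.mask_token} است."
print(fill_mask(masked))
```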
## Questions?
Post a Github issue on the [ParsRoBERTa Issues](https://github.com/hooshvare/parsbert/issues) repo. | {"language": "fa", "license": "apache-2.0"} | HooshvareLab/roberta-fa-zwnj-base | null | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"fa"
] | TAGS
#transformers #pytorch #tf #jax #roberta #fill-mask #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Roberta
This model can handle the zero-width non-joiner (ZWNJ) character used in Persian writing. It was also trained on new multi-type corpora with a new vocabulary set.
## Questions?
Post a Github issue on the ParsRoBERTa Issues repo. | [
"# Roberta\nThis model can tackle the zero-width non-joiner character for Persian writing. Also, the model was trained on new multi-types corpora with a new set of vocabulary.",
"## Questions?\nPost a Github issue on the ParsRoBERTa Issues repo."
] | [
"TAGS\n#transformers #pytorch #tf #jax #roberta #fill-mask #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Roberta\nThis model can tackle the zero-width non-joiner character for Persian writing. Also, the model was trained on new multi-types corpora with a new set of vocabulary.",
"## Questions?\nPost a Github issue on the ParsRoBERTa Issues repo."
] | [
43,
39,
20
] | [
"TAGS\n#transformers #pytorch #tf #jax #roberta #fill-mask #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# Roberta\nThis model can tackle the zero-width non-joiner character for Persian writing. Also, the model was trained on new multi-types corpora with a new set of vocabulary.## Questions?\nPost a Github issue on the ParsRoBERTa Issues repo."
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2275
- Accuracy: 0.9335
## Model description
More information needed
## Intended uses & limitations
More information needed
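In the meantime, a minimal inference sketch is shown below. The checkpoint id comes from this repository; the names and meaning of the returned labels are not documented in this card and depend on how the `amazon_reviews_multi` labels were mapped during fine-tuning, and the Spanish review text is illustrative:

```python
from transformers import pipeline

# Sketch only: load the fine-tuned checkpoint for Spanish review classification.
# The returned label ids/names come from the model config and are not documented here.
classifier = pipeline(
    "text-classification",
    model="Hormigo/roberta-base-bne-finetuned-amazon_reviews_multi",
)
print(classifier("Muy buen producto, llegó rápido y funciona perfectamente."))
```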
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
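For reference, the hyperparameters listed above map onto the following `TrainingArguments`; this is a sketch assuming a standard `Trainer` setup, not the authors' actual training script:

```python
from transformers import TrainingArguments

# Sketch: mirrors the hyperparameters reported above. The Adam betas/epsilon shown
# in the card are the Transformers defaults, so they need no explicit arguments.
training_args = TrainingArguments(
    output_dir="roberta-base-bne-finetuned-amazon_reviews_multi",  # assumed output path
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```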
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1909 | 1.0 | 1250 | 0.1717 | 0.9333 |
| 0.0932 | 2.0 | 2500 | 0.2275 | 0.9335 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "metrics": ["accuracy"], "model_index": [{"name": "roberta-base-bne-finetuned-amazon_reviews_multi", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "amazon_reviews_multi", "type": "amazon_reviews_multi", "args": "es"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9335}}]}]} | Hormigo/roberta-base-bne-finetuned-amazon_reviews_multi | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
| roberta-base-bne-finetuned-amazon\_reviews\_multi
=================================================
This model is a fine-tuned version of BSC-TeMU/roberta-base-bne on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2275
* Accuracy: 0.9335
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] | [
56,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2### Training results### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
# SpongeBob DialoGPT Model | {"tags": ["conversational"]} | Htenn/DialoGPT-small-spongebob | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# SpongeBob DialoGPT Model | [
"# SpongeBob DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# SpongeBob DialoGPT Model"
] | [
39,
8
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# SpongeBob DialoGPT Model"
] |
text-generation | transformers |
# SpongeBob DialoGPT Model | {"tags": ["conversational"]} | Htenn/DialoGPT-small-spongebobv2 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# SpongeBob DialoGPT Model | [
"# SpongeBob DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# SpongeBob DialoGPT Model"
] | [
39,
8
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# SpongeBob DialoGPT Model"
] |
text-generation | transformers |
# Rick Sanchez DialoGPT Model | {"tags": ["conversational"]} | HueJanus/DialoGPT-small-ricksanchez | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick Sanchez DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
token-classification | spacy | | Feature | Description |
| --- | --- |
| **Name** | `en_roberta_base_leetspeak_ner` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [roberta-base](https://huggingface.co/roberta-base), a model pre-trained on English with a masked language modeling (MLM) objective by Yinhan Liu et al. <br> [LeetSpeak-NER](https://huggingface.co/spaces/Huertas97/LeetSpeak-NER), the app where this model runs in production to counter information disorders |
| **License** | Apache 2.0 |
| **Author** | [Álvaro Huertas García](https://www.linkedin.com/in/alvaro-huertas-garcia/) at [AI+DA](http://aida.etsisi.upm.es/) |
### Label Scheme
<details>
<summary>View label scheme (4 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `INV_CAMO`, `LEETSPEAK`, `MIX`, `PUNCT_CAMO` |
</details>
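A minimal usage sketch, assuming the packaged pipeline (or its released wheel) is installed locally so that `spacy.load` can resolve the package name; the input sentence is the widget example from this card:

```python
import spacy

# Sketch: load the packaged pipeline and list the detected word-camouflage spans.
nlp = spacy.load("en_roberta_base_leetspeak_ner")
doc = nlp("But one other thing that we have to re;think is the way that we dy£ our #c!l.o|th?£+s.")
print([(ent.text, ent.label_) for ent in doc.ents])
```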
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 82.80 |
| `ENTS_P` | 79.66 |
| `ENTS_R` | 86.20 |
| `TRANSFORMER_LOSS` | 177808.42 |
| `NER_LOSS` | 608427.31 | | {"language": ["en"], "license": "apache-2.0", "tags": ["spacy", "token-classification"], "widget": [{"text": "But one other thing that we have to re;think is the way that we dy\u00a3 our #c!l.o|th?\u00a3+s.", "example_title": "Word camouflage detection"}]} | Huertas97/en_roberta_base_leetspeak_ner | null | [
"spacy",
"token-classification",
"en",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#spacy #token-classification #en #license-apache-2.0 #model-index #region-us
|
### Label Scheme
View label scheme (4 labels for 1 components)
### Accuracy
| [
"### Label Scheme\n\n\n\nView label scheme (4 labels for 1 components)",
"### Accuracy"
] | [
"TAGS\n#spacy #token-classification #en #license-apache-2.0 #model-index #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (4 labels for 1 components)",
"### Accuracy"
] | [
26,
15,
4
] | [
"TAGS\n#spacy #token-classification #en #license-apache-2.0 #model-index #region-us \n### Label Scheme\n\n\n\nView label scheme (4 labels for 1 components)### Accuracy"
] |
token-classification | spacy | | Feature | Description |
| --- | --- |
| **Name** | `es_roberta_base_bne_leetspeak_ner` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne), a transformer-based masked language model for Spanish pre-trained on a total of 570GB of clean, deduplicated text compiled from web crawls performed by the National Library of Spain (Biblioteca Nacional de España) <br> [LeetSpeak-NER](https://huggingface.co/spaces/Huertas97/LeetSpeak-NER), the app where this model runs in production to counter information disorders |
| **License** | Apache 2.0 |
| **Author** | [Álvaro Huertas García](https://www.linkedin.com/in/alvaro-huertas-garcia/) at [AI+DA](http://aida.etsisi.upm.es/) |
### Label Scheme
<details>
<summary>View label scheme (4 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `INV_CAMO`, `LEETSPEAK`, `MIX`, `PUNCT_CAMO` |
</details>
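As with the English variant, the package can be loaded by name once installed; the sketch below uses the card's widget example as input:

```python
import spacy

# Sketch: tag camouflaged tokens in a Spanish sentence, one entity per line.
nlp = spacy.load("es_roberta_base_bne_leetspeak_ner")
doc = nlp("La C0v!d es un 3ng@ño de los G0b!3rno$")
for ent in doc.ents:
    print(f"{ent.text}\t{ent.label_}")  # possible labels: INV_CAMO, LEETSPEAK, MIX, PUNCT_CAMO
```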
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 91.82 |
| `ENTS_P` | 89.79 |
| `ENTS_R` | 93.94 |
| `TRANSFORMER_LOSS` | 166484.92 |
| `NER_LOSS` | 318457.35 | | {"language": ["es"], "license": "apache-2.0", "tags": ["spacy", "token-classification"], "widget": [{"text": "La C0v!d es un 3ng@\u00f1o de los G0b!3rno$", "example_title": "Word camouflage detection"}]} | Huertas97/es_roberta_base_bne_leetspeak_ner | null | [
"spacy",
"token-classification",
"es",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#spacy #token-classification #es #license-apache-2.0 #model-index #region-us
|
### Label Scheme
View label scheme (4 labels for 1 components)
### Accuracy
| [
"### Label Scheme\n\n\n\nView label scheme (4 labels for 1 components)",
"### Accuracy"
] | [
"TAGS\n#spacy #token-classification #es #license-apache-2.0 #model-index #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (4 labels for 1 components)",
"### Accuracy"
] | [
26,
15,
4
] | [
"TAGS\n#spacy #token-classification #es #license-apache-2.0 #model-index #region-us \n### Label Scheme\n\n\n\nView label scheme (4 labels for 1 components)### Accuracy"
] |