pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1-900k) | metadata (stringlengths 2-438k) | id (stringlengths 5-122) | last_modified (null) | tags (sequencelengths 1-1.84k) | sha (null) | created_at (stringlengths 25-25) | arxiv (sequencelengths 0-201) | languages (sequencelengths 0-1.83k) | tags_str (stringlengths 17-9.34k) | text_str (stringlengths 0-389k) | text_lists (sequencelengths 0-722) | processed_texts (sequencelengths 1-723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fill-mask | transformers |
# roberta-large-japanese-aozora
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-large-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora-ud-goeswith), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-large-japanese-aozora")
```
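For a quick check, the tokenizer and model loaded above can be wrapped in a fill-mask pipeline; the snippet below is only a sketch and reuses the example sentence from this card's widget metadata:
```py
from transformers import pipeline
fill=pipeline("fill-mask",model=model,tokenizer=tokenizer)
# predict the [MASK] token in the widget sentence
print(fill("日本に着いたら[MASK]を訪ねなさい。"))
```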
## Reference
安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "masked-lm"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u65e5\u672c\u306b\u7740\u3044\u305f\u3089[MASK]\u3092\u8a2a\u306d\u306a\u3055\u3044\u3002"}]} | KoichiYasuoka/roberta-large-japanese-aozora | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"japanese",
"masked-lm",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ja"
] | TAGS
#transformers #pytorch #roberta #fill-mask #japanese #masked-lm #ja #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# roberta-large-japanese-aozora
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts with Japanese-LUW-Tokenizer. You can fine-tune 'roberta-large-japanese-aozora' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.
## How to Use
## Reference
安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
| [
"# roberta-large-japanese-aozora",
"## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts with Japanese-LUW-Tokenizer. You can fine-tune 'roberta-large-japanese-aozora' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.",
"## How to Use",
"## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8."
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #japanese #masked-lm #ja #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-large-japanese-aozora",
"## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts with Japanese-LUW-Tokenizer. You can fine-tune 'roberta-large-japanese-aozora' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.",
"## How to Use",
"## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8."
] |
token-classification | transformers |
# roberta-large-japanese-char-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-large-japanese-aozora-char](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora-char). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-char-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-large-japanese-char-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-large-japanese-char-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## Reference
安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]} | KoichiYasuoka/roberta-large-japanese-char-luw-upos | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"japanese",
"pos",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ja"
] | TAGS
#transformers #pytorch #roberta #token-classification #japanese #pos #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# roberta-large-japanese-char-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-large-japanese-aozora-char. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech) and FEATS.
## How to Use
or
## Reference
安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
## See Also
esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| [
"# roberta-large-japanese-char-luw-upos",
"## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-large-japanese-aozora-char. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech) and FEATS.",
"## How to Use\n\n\n\nor",
"## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.",
"## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models"
] | [
"TAGS\n#transformers #pytorch #roberta #token-classification #japanese #pos #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-large-japanese-char-luw-upos",
"## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-large-japanese-aozora-char. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech) and FEATS.",
"## How to Use\n\n\n\nor",
"## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.",
"## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models"
] |
token-classification | transformers |
# roberta-large-japanese-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-large-japanese-aozora](https://huggingface.co/KoichiYasuoka/roberta-large-japanese-aozora). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-japanese-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-large-japanese-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-large-japanese-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## Reference
安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]} | KoichiYasuoka/roberta-large-japanese-luw-upos | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"japanese",
"pos",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ja"
] | TAGS
#transformers #pytorch #roberta #token-classification #japanese #pos #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# roberta-large-japanese-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-large-japanese-aozora. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech).
## How to Use
or
## Reference
安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
## See Also
esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| [
"# roberta-large-japanese-luw-upos",
"## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-large-japanese-aozora. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech).",
"## How to Use\n\n\n\nor",
"## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.",
"## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models"
] | [
"TAGS\n#transformers #pytorch #roberta #token-classification #japanese #pos #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-large-japanese-luw-upos",
"## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-large-japanese-aozora. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech).",
"## How to Use\n\n\n\nor",
"## Reference\n\n安岡孝一: Transformersと国語研長単位による日本語係り受け解析モデルの製作, 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.",
"## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models"
] |
fill-mask | transformers |
# roberta-small-japanese-aozora-char
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts with a character tokenizer. You can fine-tune `roberta-small-japanese-aozora-char` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-char-luw-upos), dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora-char")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora-char")
```
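Continuing from the snippet above, a fill-mask pipeline gives a quick sanity check; the input sentence is the one from this card's widget metadata:
```py
from transformers import pipeline
fill=pipeline("fill-mask",model=model,tokenizer=tokenizer)
print(fill("日本に着いたら[MASK]を訪ねなさい。"))
```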
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "masked-lm"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u65e5\u672c\u306b\u7740\u3044\u305f\u3089[MASK]\u3092\u8a2a\u306d\u306a\u3055\u3044\u3002"}]} | KoichiYasuoka/roberta-small-japanese-aozora-char | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"japanese",
"masked-lm",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ja"
] | TAGS
#transformers #pytorch #roberta #fill-mask #japanese #masked-lm #ja #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# roberta-small-japanese-aozora-char
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts with character tokenizer. You can fine-tune 'roberta-small-japanese-aozora-char' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.
## How to Use
| [
"# roberta-small-japanese-aozora-char",
"## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts with character tokenizer. You can fine-tune 'roberta-small-japanese-aozora-char' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.",
"## How to Use"
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #japanese #masked-lm #ja #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-small-japanese-aozora-char",
"## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts with character tokenizer. You can fine-tune 'roberta-small-japanese-aozora-char' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.",
"## How to Use"
] |
fill-mask | transformers |
# roberta-small-japanese-aozora
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-small-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-luw-upos), dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora")
```
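As with the character-level variant above, the loaded objects can be reused in a fill-mask pipeline; the sentence below comes from the card's widget metadata, and `top_k` limits how many predictions are returned:
```py
from transformers import pipeline
fill=pipeline("fill-mask",model=model,tokenizer=tokenizer)
# return the five most likely fillers for [MASK]
print(fill("日本に着いたら[MASK]を訪ねなさい。",top_k=5))
```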
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "masked-lm"], "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "\u65e5\u672c\u306b\u7740\u3044\u305f\u3089[MASK]\u3092\u8a2a\u306d\u306a\u3055\u3044\u3002"}]} | KoichiYasuoka/roberta-small-japanese-aozora | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"japanese",
"masked-lm",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ja"
] | TAGS
#transformers #pytorch #roberta #fill-mask #japanese #masked-lm #ja #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# roberta-small-japanese-aozora
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts with Japanese-LUW-Tokenizer. You can fine-tune 'roberta-small-japanese-aozora' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.
## How to Use
| [
"# roberta-small-japanese-aozora",
"## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts with Japanese-LUW-Tokenizer. You can fine-tune 'roberta-small-japanese-aozora' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.",
"## How to Use"
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #japanese #masked-lm #ja #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-small-japanese-aozora",
"## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts with Japanese-LUW-Tokenizer. You can fine-tune 'roberta-small-japanese-aozora' for downstream tasks, such as POS-tagging, dependency-parsing, and so on.",
"## How to Use"
] |
token-classification | transformers |
# roberta-small-japanese-char-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-small-japanese-aozora-char](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-aozora-char). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-char-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-small-japanese-char-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-small-japanese-char-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]} | KoichiYasuoka/roberta-small-japanese-char-luw-upos | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"japanese",
"pos",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ja"
] | TAGS
#transformers #pytorch #roberta #token-classification #japanese #pos #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# roberta-small-japanese-char-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-small-japanese-aozora-char. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech).
## How to Use
or
## See Also
esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| [
"# roberta-small-japanese-char-luw-upos",
"## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-small-japanese-aozora-char. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech).",
"## How to Use\n\n\n\nor",
"## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models"
] | [
"TAGS\n#transformers #pytorch #roberta #token-classification #japanese #pos #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-small-japanese-char-luw-upos",
"## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-small-japanese-aozora-char. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech).",
"## How to Use\n\n\n\nor",
"## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models"
] |
token-classification | transformers |
# roberta-small-japanese-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-small-japanese-aozora](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-aozora). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-small-japanese-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-small-japanese-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["ja"], "license": "cc-by-sa-4.0", "tags": ["japanese", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification", "widget": [{"text": "\u56fd\u5883\u306e\u9577\u3044\u30c8\u30f3\u30cd\u30eb\u3092\u629c\u3051\u308b\u3068\u96ea\u56fd\u3067\u3042\u3063\u305f\u3002"}]} | KoichiYasuoka/roberta-small-japanese-luw-upos | null | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"japanese",
"pos",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ja"
] | TAGS
#transformers #pytorch #roberta #token-classification #japanese #pos #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# roberta-small-japanese-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-small-japanese-aozora. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech).
## How to Use
or
## See Also
esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| [
"# roberta-small-japanese-luw-upos",
"## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-small-japanese-aozora. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech).",
"## How to Use\n\n\n\nor",
"## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models"
] | [
"TAGS\n#transformers #pytorch #roberta #token-classification #japanese #pos #dependency-parsing #ja #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-small-japanese-luw-upos",
"## Model Description\n\nThis is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from roberta-small-japanese-aozora. Every long-unit-word is tagged by UPOS (Universal Part-Of-Speech).",
"## How to Use\n\n\n\nor",
"## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models"
] |
token-classification | transformers |
# xlm-roberta-base-english-upos
## Model Description
This is an XLM-RoBERTa model pre-trained with [UD_English-EWT](https://github.com/UniversalDependencies/UD_English-EWT) for POS-tagging and dependency-parsing, derived from [xlm-roberta-base](https://huggingface.co/xlm-roberta-base). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/xlm-roberta-base-english-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/xlm-roberta-base-english-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/xlm-roberta-base-english-upos")
```
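For the plain transformers route, the same `TokenClassificationPipeline` pattern shown in the other UPOS cards of this collection can be reused with the tokenizer and model loaded in the first snippet; the sketch below follows that pattern, and the English example sentence is an assumption:
```py
from transformers import TokenClassificationPipeline
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
# map each aggregated span back to (substring, UPOS tag)
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("It don't mean a thing if it ain't got that swing."))
```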
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| {"language": ["en"], "license": "cc-by-sa-4.0", "tags": ["english", "token-classification", "pos", "dependency-parsing"], "datasets": ["universal_dependencies"], "pipeline_tag": "token-classification"} | KoichiYasuoka/xlm-roberta-base-english-upos | null | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"english",
"pos",
"dependency-parsing",
"en",
"dataset:universal_dependencies",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #xlm-roberta #token-classification #english #pos #dependency-parsing #en #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# xlm-roberta-base-english-upos
## Model Description
This is an XLM-RoBERTa model pre-trained with UD_English-EWT for POS-tagging and dependency-parsing, derived from xlm-roberta-base. Every word is tagged by UPOS (Universal Part-Of-Speech).
## How to Use
or
## See Also
esupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
| [
"# xlm-roberta-base-english-upos",
"## Model Description\n\nThis is an XLM-RoBERTa model pre-trained with UD_English-EWT for POS-tagging and dependency-parsing, derived from xlm-roberta-base. Every word is tagged by UPOS (Universal Part-Of-Speech).",
"## How to Use\n\n\n\nor",
"## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models"
] | [
"TAGS\n#transformers #pytorch #xlm-roberta #token-classification #english #pos #dependency-parsing #en #dataset-universal_dependencies #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# xlm-roberta-base-english-upos",
"## Model Description\n\nThis is an XLM-RoBERTa model pre-trained with UD_English-EWT for POS-tagging and dependency-parsing, derived from xlm-roberta-base. Every word is tagged by UPOS (Universal Part-Of-Speech).",
"## How to Use\n\n\n\nor",
"## See Also\n\nesupar: Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models"
] |
text-generation | null | #Harry Potter DialoGPT Model | {"tags": ["conversational"]} | Konggate/DialoGPT-small-harrypotter | null | [
"conversational",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#conversational #region-us
| #Harry Potter DialoGPT Model | [] | [
"TAGS\n#conversational #region-us \n"
] |
fill-mask | transformers |
# A lite RoBERTa fill-mask model trained mostly on Greek tweets
The training dataset of this model consists of 23 million Greek tweets from approximately 5000 users in total, spanning 2008 to 2018.
The model has been trained to support the work in the paper [Multimodal Hate Speech Detection in Greek Social Media](https://www.mdpi.com/2414-4088/5/7/34)
## Load the pretrained model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Konstantinos/BERTaTweetGR")
model = AutoModel.from_pretrained("Konstantinos/BERTaTweetGR")
```
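Since this is a RoBERTa-style fill-mask model (mask token `<mask>`), masked prediction needs the LM-head variant rather than the bare encoder loaded above; a minimal sketch, using the example sentence from this card's widget metadata:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("Konstantinos/BERTaTweetGR")
mlm_model = AutoModelForMaskedLM.from_pretrained("Konstantinos/BERTaTweetGR")
fill = pipeline("fill-mask", model=mlm_model, tokenizer=tokenizer)
# widget sentence, roughly: "I walk into the <mask> and what do I see."
print(fill("μπαινω στο <mask> και τι να δω."))
```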
| {"language": "el", "widget": [{"text": "\u03bc\u03c0\u03b1\u03b9\u03bd\u03c9 \u03c3\u03c4\u03bf <mask> \u03ba\u03b1\u03b9 \u03c4\u03b9 \u03bd\u03b1 \u03b4\u03c9."}]} | Konstantinos/BERTaTweetGR | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"el",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"el"
] | TAGS
#transformers #pytorch #jax #roberta #fill-mask #el #autotrain_compatible #endpoints_compatible #region-us
|
# Α lite RoBERTa fill mask model trained mostly in greek tweets
The training dataset of this model consists of 23 million tweets in Greek, of approximately 5000 users in total, spanning from 2008 to 2018.
The model has been trained to support the work for the paper Multimodal Hate Speech Detection in Greek Social Media
## Load the pretrained model
| [
"# Α lite RoBERTa fill mask model trained mostly in greek tweets\n\n\nThe training dataset of this model consists of 23 million tweets in Greek, of approximately 5000 users in total, spanning from 2008 to 2018.\nThe model has been trained to support the work for the paper Multimodal Hate Speech Detection in Greek Social Media",
"## Load the pretrained model"
] | [
"TAGS\n#transformers #pytorch #jax #roberta #fill-mask #el #autotrain_compatible #endpoints_compatible #region-us \n",
"# Α lite RoBERTa fill mask model trained mostly in greek tweets\n\n\nThe training dataset of this model consists of 23 million tweets in Greek, of approximately 5000 users in total, spanning from 2008 to 2018.\nThe model has been trained to support the work for the paper Multimodal Hate Speech Detection in Greek Social Media",
"## Load the pretrained model"
] |
null | null | from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelForCausalLM.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") | {} | Kookly/Kooklybots | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelForCausalLM.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") | [] | [
"TAGS\n#region-us \n"
] |
text-generation | transformers |
I'm dumb | {"tags": ["conversational"]} | Koriyy/DialoGPT-medium-gf | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
I'm dumb | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Rick and Morty DialoGPT Model | {"tags": ["conversational"]} | Koro/DialoGPT-medium-rickandmorty | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick and Morty DialoGPT Model | [
"# Rick and Morty DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick and Morty DialoGPT Model"
] |
text-generation | null |
# Rick and Morty DialoGPT Model | {"tags": ["conversational"]} | Koro/DialoGPT-small-rickandmorty | null | [
"conversational",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#conversational #region-us
|
# Rick and Morty DialoGPT Model | [
"# Rick and Morty DialoGPT Model"
] | [
"TAGS\n#conversational #region-us \n",
"# Rick and Morty DialoGPT Model"
] |
fill-mask | transformers | # Bangla BERT Base
Here we publish a pretrained Bangla BERT language model, **bangla-bert**, which is now available in the Hugging Face model hub.
Here we describe [bangla-bert](https://github.com/Kowsher/bert-base-bangla), a pretrained Bangla language model based on the masked language modeling objective described in [BERT](https://arxiv.org/abs/1810.04805) and the accompanying GitHub [repository](https://github.com/google-research/bert).
## Corpus Details
We trained the Bangla BERT language model using the BanglaLM dataset from Kaggle: [BanglaLM](https://www.kaggle.com/gakowsher/bangla-language-model-dataset). There are three versions of the dataset, totalling almost 40GB.
After downloading the dataset, we proceeded with masked language modeling.
**bangla-bert Tokenizer**
```py
from transformers import AutoTokenizer, AutoModel
bnbert_tokenizer = AutoTokenizer.from_pretrained("Kowsher/bangla-bert")
text = "খাঁটি সোনার চাইতে খাঁটি আমার দেশের মাটি"
bnbert_tokenizer.tokenize(text)
# output: ['খাটি', 'সে', '##ানার', 'চাইতে', 'খাটি', 'আমার', 'দেশের', 'মাটি']
```
**MASK Generation**
Here, we can use the Bangla BERT base model for masked language modeling:
```py
from transformers import BertForMaskedLM, BertTokenizer, pipeline
model = BertForMaskedLM.from_pretrained("Kowsher/bangla-bert")
tokenizer = BertTokenizer.from_pretrained("Kowsher/bangla-bert")
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"আমি বাংলার গান {nlp.tokenizer.mask_token}"):
print(pred)
# {'sequence': 'আমি বাংলার গান লিখি', 'score': 0.17955434322357178, 'token': 24749, 'token_str': 'লিখি'}
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"তুই রাজাকার তুই {nlp.tokenizer.mask_token}"):
print(pred)
# {'sequence': 'তুই রাজাকার তুই রাজাকার', 'score': 0.9975168704986572, 'token': 13401, 'token_str': 'রাজাকার'}
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"বাংলা আমার {nlp.tokenizer.mask_token}"):
print(pred)
# {'sequence': 'বাংলা আমার অহংকার', 'score': 0.5679506063461304, 'token': 19009, 'token_str': 'অহংকার'}
```
**Cite this work**
M. Kowsher, A. A. Sami, N. J. Prottasha, M. S. Arefin, P. K. Dhar and T. Koshiba, "Bangla-BERT: Transformer-based Efficient Model for Transfer Learning and Language Understanding," in IEEE Access, 2022, doi: 10.1109/ACCESS.2022.3197662.
## Author
[Kowsher](http://kowsher.org/)
| {"language": "bn", "tags": ["Bert base Bangla", "Bengali Bert", "Bengali lm", "Bangla Base Bert", "Bangla Bert language model", "Bangla Bert"], "datasets": ["BanglaLM dataset"]} | Kowsher/bangla-bert | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"Bert base Bangla",
"Bengali Bert",
"Bengali lm",
"Bangla Base Bert",
"Bangla Bert language model",
"Bangla Bert",
"bn",
"arxiv:1810.04805",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1810.04805"
] | [
"bn"
] | TAGS
#transformers #pytorch #bert #fill-mask #Bert base Bangla #Bengali Bert #Bengali lm #Bangla Base Bert #Bangla Bert language model #Bangla Bert #bn #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #region-us
| # Bangla BERT Base
Here we published a pretrained Bangla bert language model as bangla-bert! which is now available in huggingface model hub.
Here we described bangla-bert which is a pretrained Bangla language model based on mask language modeling described in BERT and the GitHub repository
## Corpus Details
We trained the Bangla bert language model using BanglaLM dataset from kaggle BanglaLM. There is 3 version of dataset which is almost 40GB.
After downloading the dataset, we went on the way to mask LM.
bangla-bert Tokenizer
MASK Generation
here, we can use bert base bangla model as for masked language modeling:
Cite this work
M. Kowsher, A. A. Sami, N. J. Prottasha, M. S. Arefin, P. K. Dhar and T. Koshiba, "Bangla-BERT: Transformer-based Efficient Model for Transfer Learning and Language Understanding," in IEEE Access, 2022, doi: 10.1109/ACCESS.2022.3197662.
## Author
Kowsher
| [
"# Bangla BERT Base\nHere we published a pretrained Bangla bert language model as bangla-bert! which is now available in huggingface model hub. \nHere we described bangla-bert which is a pretrained Bangla language model based on mask language modeling described in BERT and the GitHub repository",
"## Corpus Details\nWe trained the Bangla bert language model using BanglaLM dataset from kaggle BanglaLM. There is 3 version of dataset which is almost 40GB.\nAfter downloading the dataset, we went on the way to mask LM.\n\n\nbangla-bert Tokenizer\n\n\nMASK Generation\nhere, we can use bert base bangla model as for masked language modeling:\n\n\nCite this work\nM. Kowsher, A. A. Sami, N. J. Prottasha, M. S. Arefin, P. K. Dhar and T. Koshiba, \"Bangla-BERT: Transformer-based Efficient Model for Transfer Learning and Language Understanding,\" in IEEE Access, 2022, doi: 10.1109/ACCESS.2022.3197662.",
"## Author\nKowsher"
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #Bert base Bangla #Bengali Bert #Bengali lm #Bangla Base Bert #Bangla Bert language model #Bangla Bert #bn #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Bangla BERT Base\nHere we published a pretrained Bangla bert language model as bangla-bert! which is now available in huggingface model hub. \nHere we described bangla-bert which is a pretrained Bangla language model based on mask language modeling described in BERT and the GitHub repository",
"## Corpus Details\nWe trained the Bangla bert language model using BanglaLM dataset from kaggle BanglaLM. There is 3 version of dataset which is almost 40GB.\nAfter downloading the dataset, we went on the way to mask LM.\n\n\nbangla-bert Tokenizer\n\n\nMASK Generation\nhere, we can use bert base bangla model as for masked language modeling:\n\n\nCite this work\nM. Kowsher, A. A. Sami, N. J. Prottasha, M. S. Arefin, P. K. Dhar and T. Koshiba, \"Bangla-BERT: Transformer-based Efficient Model for Transfer Learning and Language Understanding,\" in IEEE Access, 2022, doi: 10.1109/ACCESS.2022.3197662.",
"## Author\nKowsher"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9005
- Mae: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
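Restated as a `TrainingArguments` sketch (the values mirror the list above; the output directory and any unlisted settings are assumptions, not taken from the original card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-marc-en",  # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```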
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.108 | 1.0 | 235 | 0.9801 | 0.5610 |
| 0.9592 | 2.0 | 470 | 0.9005 | 0.5 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc-en", "results": []}]} | Krassy/xlm-roberta-base-finetuned-marc-en | null | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us
| xlm-roberta-base-finetuned-marc-en
==================================
This model is a fine-tuned version of xlm-roberta-base on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9005
* Mae: 0.5
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
# Santa Chatbot | {"tags": ["conversational"]} | KringleClaus/Dialog-santa | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Santa Chatbot | [
"# Santa Chatbot"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Santa Chatbot"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-plot
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.0
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "gpt2-plot", "results": []}]} | KrishParikh/gpt2_imdb_movie_plots | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# gpt2-plot
This model is a fine-tuned version of gpt2-medium on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.0
- Datasets 1.15.1
- Tokenizers 0.10.3
| [
"# gpt2-plot\n\nThis model is a fine-tuned version of gpt2-medium on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.8856",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.9.0\n- Datasets 1.15.1\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# gpt2-plot\n\nThis model is a fine-tuned version of gpt2-medium on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.8856",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.9.0\n- Datasets 1.15.1\n- Tokenizers 0.10.3"
] |
null | null | ---
tags:
- conversational
--- | {} | KrishnaChandra4/DialoGPT-small-Rick | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| ---
tags:
- conversational
--- | [] | [
"TAGS\n#region-us \n"
] |
text-generation | transformers |
# Harry Potter DialoGPTModel | {"tags": ["conversational"]} | KrispyIChris/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPTModel | [
"# Harry Potter DialoGPTModel"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPTModel"
] |
text-generation | transformers | # Buro discord bot | {"tags": ["conversational"]} | Kryptone/Burobot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Buro discord bot | [
"# Buro discord bot"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Buro discord bot"
] |
text-generation | transformers | # Rin chatbot | {"tags": ["conversational"]} | Kryptone/RinAI | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Rin chatbot | [
"# Rin chatbot"
] | [
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rin chatbot"
] |
text-generation | transformers |
# MoniKA unstable | {"tags": ["conversational"]} | Kryptone/monikAI-Unstable | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# MoniKA unstable | [
"# MoniKA unstable"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# MoniKA unstable"
] |
text-generation | transformers | # Monika Discord Chatbot | {"tags": ["conversational"]} | Kryptone/monikAI | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Monika Discord Chatbot | [
"# Monika Discord Chatbot"
] | [
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Monika Discord Chatbot"
] |
text2text-generation | transformers | ## mDialBART: A Cross-Lingual Dialogue Summarization Model
This model is introduced by [*ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization*](https://arxiv.org/abs/2202.05599). | {"license": "cc-by-nc-sa-4.0"} | Krystalan/mdialbart_de | null | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2202.05599",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.05599"
] | [] | TAGS
#transformers #pytorch #mbart #text2text-generation #arxiv-2202.05599 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
| ## mDialBART: A Cross-Lingual Dialogue Summarization Model
This model is introduced by *ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization*. | [
"## mDialBART: A Cross-Lingual Dialogue Summarization Model\r\nThis model is introduced by *ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization*."
] | [
"TAGS\n#transformers #pytorch #mbart #text2text-generation #arxiv-2202.05599 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## mDialBART: A Cross-Lingual Dialogue Summarization Model\r\nThis model is introduced by *ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization*."
] |
text2text-generation | transformers | ## mDialBART: A Cross-Lingual Dialogue Summarization Model
This model is introduced by [*ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization*](https://arxiv.org/abs/2202.05599). | {"license": "cc-by-nc-sa-4.0"} | Krystalan/mdialbart_zh | null | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"arxiv:2202.05599",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2202.05599"
] | [] | TAGS
#transformers #pytorch #mbart #text2text-generation #arxiv-2202.05599 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
| ## mDialBART: A Cross-Lingual Dialogue Summarization Model
This model is introduced by *ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization*. | [
"## mDialBART: A Cross-Lingual Dialogue Summarization Model\r\nThis model is introduced by *ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization*."
] | [
"TAGS\n#transformers #pytorch #mbart #text2text-generation #arxiv-2202.05599 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## mDialBART: A Cross-Lingual Dialogue Summarization Model\r\nThis model is introduced by *ClidSum: A Benchmark Dataset for Cross-Lingual Dialogue Summarization*."
] |
text-generation | transformers |
# Rick Sanchez DialoGPT Model | {"tags": ["conversational"]} | Kshaunish/DialoGPT-small-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick Sanchez DialoGPT Model | [
"# Rick Sanchez DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick Sanchez DialoGPT Model"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7758
- Matthews Correlation: 0.5259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.1926 | 1.0 | 535 | 0.7758 | 0.5259 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5258663312307151, "name": "Matthews Correlation"}]}]}]} | Kumicho/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7758
* Matthews Correlation: 0.5259
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# librispeech-100h-supervised
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0955
- Wer: 0.0345
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.8277 | 0.42 | 500 | 2.9071 | 1.0 |
| 2.0261 | 0.84 | 1000 | 0.3060 | 0.2496 |
| 0.2181 | 1.26 | 1500 | 0.1172 | 0.0873 |
| 0.1255 | 1.68 | 2000 | 0.0894 | 0.0637 |
| 0.0971 | 2.1 | 2500 | 0.0821 | 0.0560 |
| 0.078 | 2.52 | 3000 | 0.0751 | 0.0500 |
| 0.0706 | 2.94 | 3500 | 0.0721 | 0.0456 |
| 0.0609 | 3.36 | 4000 | 0.0755 | 0.0464 |
| 0.0572 | 3.78 | 4500 | 0.0705 | 0.0431 |
| 0.0528 | 4.2 | 5000 | 0.0715 | 0.0423 |
| 0.0481 | 4.62 | 5500 | 0.0691 | 0.0403 |
| 0.0471 | 5.04 | 6000 | 0.0743 | 0.0401 |
| 0.0412 | 5.46 | 6500 | 0.0757 | 0.0399 |
| 0.0416 | 5.88 | 7000 | 0.0688 | 0.0378 |
| 0.0391 | 6.3 | 7500 | 0.0704 | 0.0383 |
| 0.0367 | 6.72 | 8000 | 0.0742 | 0.0387 |
| 0.0349 | 7.14 | 8500 | 0.0732 | 0.0388 |
| 0.033 | 7.56 | 9000 | 0.0719 | 0.0374 |
| 0.0327 | 7.98 | 9500 | 0.0750 | 0.0369 |
| 0.0292 | 8.4 | 10000 | 0.0734 | 0.0368 |
| 0.0303 | 8.82 | 10500 | 0.0733 | 0.0365 |
| 0.0283 | 9.24 | 11000 | 0.0766 | 0.0357 |
| 0.0269 | 9.66 | 11500 | 0.0761 | 0.0350 |
| 0.0268 | 10.08 | 12000 | 0.0802 | 0.0359 |
| 0.0245 | 10.42 | 12500 | 0.0758 | 0.0354 |
| 0.023 | 10.84 | 13000 | 0.0775 | 0.0349 |
| 0.0186 | 11.26 | 13500 | 0.0817 | 0.0355 |
| 0.0176 | 11.68 | 14000 | 0.0853 | 0.0354 |
| 0.0163 | 12.1 | 14500 | 0.0880 | 0.0347 |
| 0.0156 | 12.52 | 15000 | 0.0864 | 0.0357 |
| 0.0141 | 12.94 | 15500 | 0.0897 | 0.0355 |
| 0.0134 | 13.36 | 16000 | 0.0915 | 0.0349 |
| 0.013 | 13.78 | 16500 | 0.0928 | 0.0350 |
| 0.0097 | 13.42 | 17000 | 0.0955 | 0.0345 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
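For reference, transcription with a fine-tuned wav2vec 2.0 CTC checkpoint such as this one typically looks like the sketch below. It is illustrative rather than part of the original card: it assumes the repository id from this card's metadata, that the repository ships the processor/tokenizer files, and a 16 kHz mono file named `sample.wav`.

```python
# Illustrative greedy CTC decoding sketch; the audio path is a placeholder.
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Kuray107/librispeech-100h-supervised"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sr = torchaudio.load("sample.wav")   # expected to be 16 kHz mono
inputs = processor(speech.squeeze(0).numpy(), sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```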
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "librispeech-100h-supervised", "results": []}]} | Kuray107/librispeech-100h-supervised | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| librispeech-100h-supervised
===========================
This model is a fine-tuned version of facebook/wav2vec2-large-lv60 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0955
* Wer: 0.0345
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 24
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 15
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.14.1
* Pytorch 1.10.2
* Datasets 1.18.2
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.2\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.2\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# timit-5percent-supervised
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6615
- Wer: 0.2788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 5.3773 | 33.33 | 500 | 2.9693 | 1.0 |
| 1.4746 | 66.67 | 1000 | 0.5050 | 0.3359 |
| 0.1067 | 100.0 | 1500 | 0.5981 | 0.3054 |
| 0.0388 | 133.33 | 2000 | 0.6192 | 0.2712 |
| 0.0244 | 166.67 | 2500 | 0.6392 | 0.2776 |
| 0.018 | 200.0 | 3000 | 0.6615 | 0.2788 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
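The `mixed_precision_training: Native AMP` entry above refers to PyTorch's built-in automatic mixed precision, which the Trainer applies internally. As a rough, hand-rolled illustration (not the actual Trainer code) of what one AMP training step does, with `model`, `batch`, and `optimizer` standing in for the real objects:

```python
# Schematic "Native AMP" training step; assumes batch contains labels so the model returns a loss.
import torch

scaler = torch.cuda.amp.GradScaler()

def training_step(model, batch, optimizer):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():      # forward pass runs in mixed precision
        loss = model(**batch).loss
    scaler.scale(loss).backward()        # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)               # unscale gradients, then take the optimizer step
    scaler.update()
    return loss.detach()
```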
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "timit-5percent-supervised", "results": []}]} | Kuray107/timit-5percent-supervised | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| timit-5percent-supervised
=========================
This model is a fine-tuned version of facebook/wav2vec2-large-lv60 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6615
* Wer: 0.2788
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 200
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.14.1
* Pytorch 1.10.2
* Datasets 1.18.2
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.2\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 200\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.2\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# timit-supervised
This model is a fine-tuned version of [Experiments/single_dataset/timit-supervised/checkpoint-3500](https://huggingface.co/Experiments/single_dataset/timit-supervised/checkpoint-3500) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1272
- Wer: 0.0532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0554 | 1.77 | 500 | 0.1310 | 0.0697 |
| 0.0509 | 3.53 | 1000 | 0.1497 | 0.0710 |
| 0.038 | 5.3 | 1500 | 0.1190 | 0.0659 |
| 0.0328 | 7.07 | 2000 | 0.0926 | 0.0596 |
| 0.0247 | 8.83 | 2500 | 0.0873 | 0.0570 |
| 0.0229 | 10.6 | 3000 | 0.0890 | 0.0532 |
| 0.0183 | 12.37 | 3500 | 0.0969 | 0.0532 |
| 0.0326 | 14.13 | 4000 | 0.0809 | 0.0469 |
| 0.03 | 15.9 | 4500 | 0.0758 | 0.0444 |
| 0.0264 | 17.67 | 5000 | 0.0973 | 0.0520 |
| 0.0244 | 19.43 | 5500 | 0.1272 | 0.0532 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
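The `linear` scheduler with 1000 warmup steps listed above ramps the learning rate up linearly and then decays it linearly towards zero. A small dependency-free sketch of that schedule (equivalent in spirit to `transformers.get_linear_schedule_with_warmup`; `total_steps` is taken from the last step in the results table above):

```python
# Sketch of the linear warmup / linear decay schedule named in the hyperparameters.
def linear_schedule(step, warmup_steps=1000, total_steps=5500, base_lr=1e-4):
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)          # warmup phase
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)  # linear decay

# peak LR is reached at step 1000 and decays to 0 at the final step
print(linear_schedule(500), linear_schedule(1000), linear_schedule(5500))
```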
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "timit-supervised", "results": []}]} | Kuray107/timit-supervised | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #endpoints_compatible #region-us
| timit-supervised
================
This model is a fine-tuned version of Experiments/single\_dataset/timit-supervised/checkpoint-3500 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1272
* Wer: 0.0532
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 20
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.14.1
* Pytorch 1.10.2
* Datasets 1.18.2
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.2\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.2\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wsj0-full-supervised
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Wer: 0.0343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.517 | 0.86 | 500 | 2.9475 | 1.0 |
| 2.2387 | 1.72 | 1000 | 0.4004 | 0.3498 |
| 0.3081 | 2.57 | 1500 | 0.1362 | 0.1159 |
| 0.1744 | 3.43 | 2000 | 0.1125 | 0.0929 |
| 0.1285 | 4.29 | 2500 | 0.0894 | 0.0727 |
| 0.1015 | 5.15 | 3000 | 0.0852 | 0.0642 |
| 0.0811 | 6.0 | 3500 | 0.0789 | 0.0614 |
| 0.0748 | 6.86 | 4000 | 0.0746 | 0.0529 |
| 0.0639 | 7.72 | 4500 | 0.0714 | 0.0481 |
| 0.0606 | 8.58 | 5000 | 0.0698 | 0.0489 |
| 0.0525 | 9.43 | 5500 | 0.0747 | 0.0464 |
| 0.0489 | 10.29 | 6000 | 0.0594 | 0.0396 |
| 0.0419 | 11.15 | 6500 | 0.0600 | 0.0359 |
| 0.0414 | 12.01 | 7000 | 0.0612 | 0.0412 |
| 0.0383 | 12.86 | 7500 | 0.0676 | 0.0392 |
| 0.0352 | 13.72 | 8000 | 0.0626 | 0.0388 |
| 0.034 | 14.58 | 8500 | 0.0699 | 0.0372 |
| 0.0309 | 15.44 | 9000 | 0.0807 | 0.0420 |
| 0.0295 | 16.3 | 9500 | 0.0796 | 0.0396 |
| 0.0273 | 17.15 | 10000 | 0.0716 | 0.0376 |
| 0.0271 | 18.01 | 10500 | 0.0657 | 0.0384 |
| 0.0251 | 18.87 | 11000 | 0.0585 | 0.0351 |
| 0.024 | 19.73 | 11500 | 0.0557 | 0.0347 |
| 0.0252 | 20.58 | 12000 | 0.0609 | 0.0327 |
| 0.0231 | 21.44 | 12500 | 0.0720 | 0.0368 |
| 0.0202 | 22.3 | 13000 | 0.0625 | 0.0343 |
| 0.0195 | 23.16 | 13500 | 0.0635 | 0.0372 |
| 0.0201 | 24.01 | 14000 | 0.0582 | 0.0335 |
| 0.0183 | 24.87 | 14500 | 0.0562 | 0.0343 |
| 0.0183 | 25.73 | 15000 | 0.0629 | 0.0335 |
| 0.0175 | 26.59 | 15500 | 0.0593 | 0.0323 |
| 0.017 | 27.44 | 16000 | 0.0631 | 0.0339 |
| 0.0162 | 28.3 | 16500 | 0.0597 | 0.0335 |
| 0.0169 | 29.16 | 17000 | 0.0623 | 0.0343 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3
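The `Wer` column above is the standard word error rate: the word-level edit distance between hypothesis and reference divided by the number of reference words. A dependency-free sketch of the metric (not the exact implementation used during training):

```python
# Word error rate via word-level Levenshtein distance.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(1, len(ref))

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words = 0.167
```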
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wsj0-full-supervised", "results": []}]} | Kuray107/wsj0-full-supervised | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| wsj0-full-supervised
====================
This model is a fine-tuned version of facebook/wav2vec2-large-lv60 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0623
* Wer: 0.0343
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 12
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.14.1
* Pytorch 1.10.2
* Datasets 1.18.2
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.2\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.2\n* Datasets 1.18.2\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | Kush/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
feature-extraction | transformers | This is a **KOREAN** BERT masked-LM model pretrained and adapted to the **BEAUTY** domain. (BertForMaskedLM)
About 60,000 reviews were used.
It was fine-tuned based on _beomi/kcbert-base_ model weights.
Enjoy! | {} | Kyoungmin/beauty-base-KLCP | null | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us
| This is a KOREAN BERT masked-LM model pretrained and adapted to the BEAUTY domain. (BertForMaskedLM)
About 60,000 reviews were used.
It was fine-tuned based on _beomi/kcbert-base_ model weights.
Enjoy! | [] | [
"TAGS\n#transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | A **second** BertForMaskedLM model pretrained for the **KOREAN Beauty** domain.
About 120,000 reviews were used.
It was trained based on _beomi/kcbert-base_ .
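Since this is a masked-LM checkpoint, the quickest way to try it is the `fill-mask` pipeline. A minimal sketch follows; the Korean example sentence is purely illustrative.

```python
# Minimal fill-mask sketch; the example sentence is illustrative only.
from transformers import pipeline

fill = pipeline("fill-mask", model="Kyoungmin/beauty-base-KLCP2")
masked = f"이 제품 {fill.tokenizer.mask_token} 너무 좋아요."  # "This product ___ really good."
for candidate in fill(masked):
    print(candidate["token_str"], round(candidate["score"], 3))
```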
Check out _Kyoungmin/beauty-base-KLCP_ for a smaller model!! | {} | Kyoungmin/beauty-base-KLCP2 | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| A second BertForMaskedLM model pretrained for the KOREAN Beauty domain.
About 120,000 reviews were used.
It was trained based on _beomi/kcbert-base_ .
Check out _Kyoungmin/beauty-base-KLCP_ for a smaller model!! | [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | null | No use | {} | Kyoungmin/beauty-word2vec | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| No use | [] | [
"TAGS\n#region-us \n"
] |
fill-mask | transformers | This is a practice model for kcbert-base with Korean petition data! | {} | Kyoungmin/kcbert-base-petition | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| This is practice model for kcbert-base with Korean petition data! | [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# VADER DialogGPT Model | {"tags": ["conversational"]} | LARACHNIDE/DialogGPT-small-sw | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# VADER DialogGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
multiple-choice | transformers |
# Roberta Large Fine Tuned on RACE
## Model description
This model follows the implementation by Allen AI team about [Aristo Roberta V7 Model](https://leaderboard.allenai.org/arc/submission/blcotvl7rrltlue6bsv0) given in [ARC Challenge](https://leaderboard.allenai.org/arc/submissions/public)
#### How to use
```python
import logging

import datasets
import torch
from transformers import RobertaTokenizer
from transformers import RobertaForMultipleChoice
tokenizer = RobertaTokenizer.from_pretrained(
"LIAMF-USP/aristo-roberta")
model = RobertaForMultipleChoice.from_pretrained(
    "LIAMF-USP/aristo-roberta")
MAX_SEQ_LENGTH = 256  # matches the max_length hyperparameter reported below
dataset = datasets.load_dataset(
    "arc",  # the authors' ARC loading script (provides the option_text/option_context fields used below)
    split=["train", "validation", "test"],
)
training_examples = dataset[0]
evaluation_examples = dataset[1]
test_examples = dataset[2]
example=training_examples[0]
example_id = example["example_id"]
question = example["question"]
label_example = example["answer"]
options = example["options"]
if label_example in ["A", "B", "C", "D", "E"]:
label_map = {label: i for i, label in enumerate(
["A", "B", "C", "D", "E"])}
elif label_example in ["1", "2", "3", "4", "5"]:
label_map = {label: i for i, label in enumerate(
["1", "2", "3", "4", "5"])}
else:
print(f"{label_example} not found")
while len(options) < 5:
empty_option = {}
empty_option['option_context'] = ''
empty_option['option_text'] = ''
options.append(empty_option)
choices_inputs = []
for ending_idx, option in enumerate(options):
ending = option["option_text"]
context = option["option_context"]
if question.find("_") != -1:
        # fill-in-the-blank questions
question_option = question.replace("_", ending)
else:
question_option = question + " " + ending
inputs = tokenizer(
context,
question_option,
add_special_tokens=True,
max_length=MAX_SEQ_LENGTH,
padding="max_length",
truncation=True,
return_overflowing_tokens=False,
)
if "num_truncated_tokens" in inputs and inputs["num_truncated_tokens"] > 0:
logging.warning(f"Question: {example_id} with option {ending_idx} was truncated")
choices_inputs.append(inputs)
label = label_map[label_example]
input_ids = [x["input_ids"] for x in choices_inputs]
attention_mask = (
[x["attention_mask"] for x in choices_inputs]
    # as the sentences follow the same structure, just one of them is
# necessary to check
if "attention_mask" in choices_inputs[0]
else None
)
# example_id and token_type_ids are not accepted by RobertaForMultipleChoice.forward,
# so only the tensors the model expects are passed in
example_encoded = {
    "input_ids": torch.tensor(input_ids).unsqueeze(0),            # (1, num_choices, seq_len)
    "attention_mask": torch.tensor(attention_mask).unsqueeze(0),
    "labels": torch.tensor(label).unsqueeze(0),
}
output = model(**example_encoded)
```
## Training data
The training data was the same as proposed [here](https://leaderboard.allenai.org/arc/submission/blcotvl7rrltlue6bsv0)
The only difference was the hyperparameters of the RACE fine-tuned model, which were reported [here](https://huggingface.co/LIAMF-USP/roberta-large-finetuned-race#eval-results)
## Training procedure
It was necessary to preprocess the data with a method that is exemplified for a single instance in the _How to use_ section. The hyperparameters used were the following:
| Hyperparameter | Value |
|:----:|:----:|
| adam_beta1 | 0.9 |
| adam_beta2 | 0.98 |
| adam_epsilon | 1.000e-8 |
| eval_batch_size | 16 |
| train_batch_size | 4 |
| fp16 | True |
| gradient_accumulation_steps | 4 |
| learning_rate | 0.00001 |
| warmup_steps | 0.06 |
| max_length | 256 |
| epochs | 4 |
The other parameters were the default ones from [Trainer](https://huggingface.co/transformers/main_classes/trainer.html) and [Trainer Arguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments)
## Eval results:
| Dataset Acc | Challenge Test |
|:----:|:----:|
| | 65.358 |
**The model was trained with a TITAN RTX**
| {"language": "english", "license": "mit", "datasets": ["race", "ai2_arc", "openbookqa"], "metrics": ["accuracy"]} | LIAMF-USP/aristo-roberta | null | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"multiple-choice",
"dataset:race",
"dataset:ai2_arc",
"dataset:openbookqa",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"english"
] | TAGS
#transformers #pytorch #tf #jax #roberta #multiple-choice #dataset-race #dataset-ai2_arc #dataset-openbookqa #license-mit #endpoints_compatible #region-us
| Roberta Large Fine Tuned on RACE
================================
Model description
-----------------
This model follows the implementation by Allen AI team about Aristo Roberta V7 Model given in ARC Challenge
#### How to use
Training data
-------------
The training data was the same as proposed here
The only difference was the hyperparameters of the RACE fine-tuned model, which were reported here
Training procedure
------------------
It was necessary to preprocess the data with a method that is exemplified for a single instance in the *How to use* section. The used hyperparameters were the following:
The other parameters were the default ones from Trainer and Trainer Arguments
Eval results:
-------------
The model was trained with a TITAN RTX
| [
"#### How to use\n\n\nTraining data\n-------------\n\n\nthe Training data was the same as proposed here\n\n\nThe only diferrence was the hypeparameters of RACE fine tuned model, which were reported here\n\n\nTraining procedure\n------------------\n\n\nIt was necessary to preprocess the data with a method that is exemplified for a single instance in the *How to use* section. The used hyperparameters were the following:\n\n\n\nThe other parameters were the default ones from Trainer and Trainer Arguments\n\n\nEval results:\n-------------\n\n\n\nThe model was trained with a TITAN RTX"
] | [
"TAGS\n#transformers #pytorch #tf #jax #roberta #multiple-choice #dataset-race #dataset-ai2_arc #dataset-openbookqa #license-mit #endpoints_compatible #region-us \n",
"#### How to use\n\n\nTraining data\n-------------\n\n\nthe Training data was the same as proposed here\n\n\nThe only diferrence was the hypeparameters of RACE fine tuned model, which were reported here\n\n\nTraining procedure\n------------------\n\n\nIt was necessary to preprocess the data with a method that is exemplified for a single instance in the *How to use* section. The used hyperparameters were the following:\n\n\n\nThe other parameters were the default ones from Trainer and Trainer Arguments\n\n\nEval results:\n-------------\n\n\n\nThe model was trained with a TITAN RTX"
] |
multiple-choice | transformers |
# Roberta Large Fine Tuned on RACE
## Model description
This model is a fine-tuned model of Roberta-large applied on RACE
#### How to use
```python
import datasets
import torch
from transformers import RobertaTokenizer
from transformers import RobertaForMultipleChoice
tokenizer = RobertaTokenizer.from_pretrained(
"LIAMF-USP/roberta-large-finetuned-race")
model = RobertaForMultipleChoice.from_pretrained(
"LIAMF-USP/roberta-large-finetuned-race")
dataset = datasets.load_dataset(
"race",
"all",
split=["train", "validation", "test"],
)training_examples = dataset[0]
evaluation_examples = dataset[1]
test_examples = dataset[2]
example=training_examples[0]
example_id = example["example_id"]
question = example["question"]
context = example["article"]
options = example["options"]
label_example = example["answer"]
label_map = {label: i
for i, label in enumerate(["A", "B", "C", "D"])}
choices_inputs = []
for ending_idx, ending in enumerate(options):
    if question.find("_") != -1:
        # fill-in-the-blank questions
        question_option = question.replace("_", ending)
    else:
        question_option = question + " " + ending
inputs = tokenizer(
context,
question_option,
add_special_tokens=True,
max_length=MAX_SEQ_LENGTH,
padding="max_length",
truncation=True,
return_overflowing_tokens=False,
    )
    choices_inputs.append(inputs)
label = label_map[label_example]
input_ids = [x["input_ids"] for x in choices_inputs]
attention_mask = (
[x["attention_mask"] for x in choices_inputs]
    # as the sentences follow the same structure,
    # just one of them is necessary to check
if "attention_mask" in choices_inputs[0]
else None
)
# example_id is not accepted by RobertaForMultipleChoice.forward,
# so only the tensors the model expects are passed in
example_encoded = {
    "input_ids": torch.tensor(input_ids).unsqueeze(0),            # (1, num_choices, seq_len)
    "attention_mask": torch.tensor(attention_mask).unsqueeze(0),
    "labels": torch.tensor(label).unsqueeze(0),
}
output = model(**example_encoded)
```
## Training data
The initial model was [roberta large model](https://huggingface.co/roberta-large) which was then fine-tuned on [RACE dataset](https://www.cs.cmu.edu/~glai1/data/race/)
## Training procedure
It was necessary to preprocess the data with a method that is exemplified for a single instance in the _How to use_ section. The hyperparameters used were the following:
| Hyperparameter | Value |
|:----:|:----:|
| adam_beta1 | 0.9 |
| adam_beta2 | 0.98 |
| adam_epsilon | 1.000e-8 |
| eval_batch_size | 32 |
| train_batch_size | 1 |
| fp16 | True |
| gradient_accumulation_steps | 16 |
| learning_rate | 0.00001 |
| warmup_steps | 1000 |
| max_length | 512 |
| epochs | 4 |
## Eval results:
| Dataset Acc | Eval | All Test | High School Test | Middle School Test |
|:----:|:----:|:----:|:----:|:----:|
| | 85.2 | 84.9 | 83.5 | 88.0 |
**The model was trained with a Tesla V100-PCIE-16GB** | {"language": "english", "license": "mit", "datasets": ["race"], "metrics": ["accuracy"]} | LIAMF-USP/roberta-large-finetuned-race | null | [
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"multiple-choice",
"dataset:race",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"english"
] | TAGS
#transformers #pytorch #tf #jax #roberta #multiple-choice #dataset-race #license-mit #endpoints_compatible #region-us
| Roberta Large Fine Tuned on RACE
================================
Model description
-----------------
This model is a fine-tuned model of Roberta-large applied on RACE
#### How to use
Training data
-------------
The initial model was roberta large model which was then fine-tuned on RACE dataset
Training procedure
------------------
It was necessary to preprocess the data with a method that is exemplified for a single instance in the *How to use* section. The used hyperparameters were the following:
Eval results:
-------------
The model was trained with a Tesla V100-PCIE-16GB
| [
"#### How to use\n\n\nTraining data\n-------------\n\n\nThe initial model was roberta large model which was then fine-tuned on RACE dataset\n\n\nTraining procedure\n------------------\n\n\nIt was necessary to preprocess the data with a method that is exemplified for a single instance in the *How to use* section. The used hyperparameters were the following:\n\n\n\nEval results:\n-------------\n\n\n\nThe model was trained with a Tesla V100-PCIE-16GB"
] | [
"TAGS\n#transformers #pytorch #tf #jax #roberta #multiple-choice #dataset-race #license-mit #endpoints_compatible #region-us \n",
"#### How to use\n\n\nTraining data\n-------------\n\n\nThe initial model was roberta large model which was then fine-tuned on RACE dataset\n\n\nTraining procedure\n------------------\n\n\nIt was necessary to preprocess the data with a method that is exemplified for a single instance in the *How to use* section. The used hyperparameters were the following:\n\n\n\nEval results:\n-------------\n\n\n\nThe model was trained with a Tesla V100-PCIE-16GB"
] |
null | null | git lfs install
git clone https://huggingface.co/LPM/AI_1 | {} | LPM/AI_1 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| git lfs install
git clone URL | [] | [
"TAGS\n#region-us \n"
] |
text-generation | transformers |
# Rick DialoGPT Model
| {"tags": ["conversational"]} | LactoseLegend/DialoGPT-small-Rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick DialoGPT Model
| [
"# Rick DioloGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick DioloGPT Model"
] |
text-generation | transformers | ### Model information
* Fine tuning dataset: https://www.kaggle.com/seungguini/bts-youtube-comments
* Base model: GPT2 Small
* Epoch: 5
* API page: [Ainize](https://ainize.ai/teachable-ainize/gpt2-train?branch=train/cv695m9g40av0cdabuqp)
* Demo page: [End-point](https://kubecon-tabtab-ainize-team.endpoint.ainize.ai/?modelUrl=https://train-cv695m9g40av0cdabuqp-gpt2-train-teachable-ainize.endpoint.ainize.ai/predictions/gpt-2-en-small-finetune)
### ===Teachable NLP=== ###
Training a GPT-2 model normally requires writing code and GPU resources, but here you can easily fine-tune one and get an API to use the model for free.
* Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
* Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
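To generate comments locally instead of through the API above, a minimal sampling sketch looks like the following; the prompt and sampling settings are only illustrative.

```python
# Minimal local generation sketch; prompt and sampling settings are illustrative.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Laeyoung/BTS-comments-generator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("This song is", return_tensors="pt")
output = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_p=0.95,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```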
| {} | Laeyoung/BTS-comments-generator | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| ### Model information
* Fine tuning dataset: URL
* Base model: GPT2 Small
* Epoch: 5
* API page: Ainize
* Demo page: End-point
### ===Teachable NLP=== ###
Training a GPT-2 model normally requires writing code and GPU resources, but here you can easily fine-tune one and get an API to use the model for free.
* Teachable NLP: Teachable NLP
* Tutorial: Tutorial
| [
"### Model information\n* Fine tuning dataset: URL\n* Base model: GPT2 Small\n* Epoch: 5\n* API page: Ainize\n* Demo page: End-point",
"### ===Teachable NLP=== ###\nTo train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.\n* Teachable NLP: Teachable NLP\n* Tutorial: Tutorial"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Model information\n* Fine tuning dataset: URL\n* Base model: GPT2 Small\n* Epoch: 5\n* API page: Ainize\n* Demo page: End-point",
"### ===Teachable NLP=== ###\nTo train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.\n* Teachable NLP: Teachable NLP\n* Tutorial: Tutorial"
] |
text-generation | transformers |
# Witcher1 Geralt DialoGPT small model | {"tags": ["conversational"]} | Laezor/DialoGPT-small-witcher1 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Witcher1 Geralt DialoGPT small model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Yakuza 0 DialoGPT Model | {"tags": ["conversational"]} | Laezor/DialoGPT-small-yakuza_0 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Yakuza 0 DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Dialogue From Persona 3 | {"tags": ["conversational"]} | LaiJY/DialoGPTChatbot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Dialogue From Persona 3 | [
"# Dialogue From Persona 3"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Dialogue From Persona 3"
] |
translation | transformers | ### marianmt-th-zh_cn
* source languages: th
* target languages: zh_cn
* dataset:
* model: transformer-align
* pre-processing: normalization + SentencePiece
* test set scores: 15.53
## Training
Training scripts from [LalitaDeelert/NLP-ZH_TH-Project](https://github.com/LalitaDeelert/NLP-ZH_TH-Project). Experiments tracked at [cstorm125/marianmt-th-zh_cn](https://wandb.ai/cstorm125/marianmt-th-zh_cn).
```
export WANDB_PROJECT=marianmt-th-zh_cn
python train_model.py --input_fname ../data/v1/Train.csv \
    --output_dir ../models/marianmt-th-zh_cn \
    --source_lang th --target_lang zh \
    --metric_tokenize zh --fp16
```
## Usage
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Lalita/marianmt-zh_cn-th")
model = AutoModelForSeq2SeqLM.from_pretrained("Lalita/marianmt-zh_cn-th").cpu()
src_text = [
'ฉันรักคุณ',
'ฉันอยากกินข้าว',
]
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
> ['我爱你', '我想吃饭。']
```
## Requirements
```
transformers==4.6.0
torch==1.8.0
``` | {"tags": ["translation", "torch==1.8.0"], "widget": [{"text": "Inference Unavailable"}]} | Lalita/marianmt-th-zh_cn | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"torch==1.8.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #marian #text2text-generation #translation #torch==1.8.0 #autotrain_compatible #endpoints_compatible #region-us
| ### marianmt-th-zh_cn
* source languages: th
* target languages: zh_cn
* dataset:
* model: transformer-align
* pre-processing: normalization + SentencePiece
* test set scores: 15.53
## Training
Training scripts from LalitaDeelert/NLP-ZH_TH-Project. Experiments tracked at cstorm125/marianmt-th-zh_cn.
## Usage
## Requirements
| [
"### marianmt-th-zh_cn\n* source languages: th\n* target languages: zh_cn\n* dataset: \n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* test set scores: 15.53",
"## Training\n\nTraining scripts from LalitaDeelert/NLP-ZH_TH-Project. Experiments tracked at cstorm125/marianmt-th-zh_cn.",
"## Usage",
"## Requirements"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #torch==1.8.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### marianmt-th-zh_cn\n* source languages: th\n* target languages: zh_cn\n* dataset: \n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* test set scores: 15.53",
"## Training\n\nTraining scripts from LalitaDeelert/NLP-ZH_TH-Project. Experiments tracked at cstorm125/marianmt-th-zh_cn.",
"## Usage",
"## Requirements"
] |
translation | transformers | ### marianmt-zh_cn-th
* source languages: zh_cn
* target languages: th
* dataset:
* model: transformer-align
* pre-processing: normalization + SentencePiece
* test set scores: syllable: 15.95, word: 8.43
## Training
Training scripts from [LalitaDeelert/NLP-ZH_TH-Project](https://github.com/LalitaDeelert/NLP-ZH_TH-Project). Experiments tracked at [cstorm125/marianmt-zh_cn-th](https://wandb.ai/cstorm125/marianmt-zh_cn-th).
```
export WANDB_PROJECT=marianmt-zh_cn-th
python train_model.py --input_fname ../data/v1/Train.csv \
    --output_dir ../models/marianmt-zh_cn-th \
    --source_lang zh --target_lang th \
    --metric_tokenize th_syllable --fp16
```
## Usage
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Lalita/marianmt-zh_cn-th")
model = AutoModelForSeq2SeqLM.from_pretrained("Lalita/marianmt-zh_cn-th").cpu()
src_text = [
'我爱你',
'我想吃米饭',
]
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
> ['ผมรักคุณนะ', 'ฉันอยากกินข้าว']
```
## Requirements
```
transformers==4.6.0
torch==1.8.0
``` | {"tags": ["translation", "torch==1.8.0"], "widget": [{"text": "Inference Unavailable"}]} | Lalita/marianmt-zh_cn-th | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"torch==1.8.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #marian #text2text-generation #translation #torch==1.8.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### marianmt-zh_cn-th
* source languages: zh_cn
* target languages: th
* dataset:
* model: transformer-align
* pre-processing: normalization + SentencePiece
* test set scores: syllable: 15.95, word: 8.43
## Training
Training scripts from LalitaDeelert/NLP-ZH_TH-Project. Experiments tracked at cstorm125/marianmt-zh_cn-th.
## Usage
## Requirements
| [
"### marianmt-zh_cn-th \n* source languages: zh_cn\n* target languages: th\n* dataset: \n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* test set scores: syllable: 15.95, word: 8.43",
"## Training\n\nTraining scripts from LalitaDeelert/NLP-ZH_TH-Project. Experiments tracked at cstorm125/marianmt-zh_cn-th.",
"## Usage",
"## Requirements"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #torch==1.8.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### marianmt-zh_cn-th \n* source languages: zh_cn\n* target languages: th\n* dataset: \n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* test set scores: syllable: 15.95, word: 8.43",
"## Training\n\nTraining scripts from LalitaDeelert/NLP-ZH_TH-Project. Experiments tracked at cstorm125/marianmt-zh_cn-th.",
"## Usage",
"## Requirements"
] |
null | speechbrain |
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Speaker Verification with ECAPA-TDNN embeddings on cnceleb
This repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain.
The system can be used to extract speaker embeddings as well.
It is trained on cnceleb 1+ cnceleb2 training data.
For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The model performance on cnceleb1-test set (Cleaned) is:
| Release | EER(%) | minDCF |
|:-------------:|:--------------:|:--------------:|
## Pipeline description
This system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss. Speaker Verification is performed using cosine distance between speaker embeddings.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Compute your speaker embeddings
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
classifier = EncoderClassifier.from_hparams(source="LanceaKing/spkrec-ecapa-cnceleb")
signal, fs = torchaudio.load('samples/audio_samples/example1.wav')
embeddings = classifier.encode_batch(signal)
```
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode_batch* and *classify_batch*.
### Perform Speaker Verification
```python
from speechbrain.pretrained import SpeakerRecognition
verification = SpeakerRecognition.from_hparams(source="LanceaKing/spkrec-ecapa-cnceleb", savedir="pretrained_models/spkrec-ecapa-cnceleb")
score, prediction = verification.verify_files("speechbrain/spkrec-ecapa-cnceleb/example1.wav", "speechbrain/spkrec-ecapa-cnceleb/example2.flac")
```
The prediction is 1 if the two signals in input are from the same speaker and 0 otherwise.
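Since verification here is based on cosine distance between embeddings, an equivalent manual check using `encode_batch` can be sketched as below. The file names are placeholders and the 0.25 decision threshold is only an illustrative value, not the calibrated one used by `verify_files`.

```python
# Manual cosine-similarity verification sketch; threshold and file names are illustrative.
import torch
import torchaudio
from speechbrain.pretrained import EncoderClassifier

classifier = EncoderClassifier.from_hparams(source="LanceaKing/spkrec-ecapa-cnceleb")
emb1 = classifier.encode_batch(torchaudio.load("example1.wav")[0]).squeeze()
emb2 = classifier.encode_batch(torchaudio.load("example2.flac")[0]).squeeze()
score = torch.nn.functional.cosine_similarity(emb1, emb2, dim=-1).item()
print(score, "same speaker" if score > 0.25 else "different speaker")
```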
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
### Training
The model was trained with SpeechBrain (aa018540).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/LanceaKing/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/CNCeleb/SpeakerRec
python train_speaker_embeddings.py hparams/train_ecapa_tdnn.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1-ahC1xeyPinAHp2oAohL-02smNWO41Cc?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing ECAPA-TDNN
```
@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
author = {Brecht Desplanques and
Jenthe Thienpondt and
Kris Demuynck},
editor = {Helen Meng and
Bo Xu and
Thomas Fang Zheng},
title = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation
in {TDNN} Based Speaker Verification},
booktitle = {Interspeech 2020},
pages = {3830--3834},
publisher = {{ISCA}},
year = {2020},
}
```
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/ | {"language": "zh", "license": "apache-2.0", "tags": ["speechbrain", "embeddings", "Speaker", "Verification", "Identification", "pytorch", "ECAPA", "TDNN"], "datasets": ["cnceleb"], "metrics": ["EER"]} | LanceaKing/spkrec-ecapa-cnceleb | null | [
"speechbrain",
"embeddings",
"Speaker",
"Verification",
"Identification",
"pytorch",
"ECAPA",
"TDNN",
"zh",
"dataset:cnceleb",
"arxiv:2106.04624",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2106.04624"
] | [
"zh"
] | TAGS
#speechbrain #embeddings #Speaker #Verification #Identification #pytorch #ECAPA #TDNN #zh #dataset-cnceleb #arxiv-2106.04624 #license-apache-2.0 #region-us
|
Speaker Verification with ECAPA-TDNN embeddings on cnceleb
==========================================================
This repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain.
The system can be used to extract speaker embeddings as well.
It is trained on cnceleb 1+ cnceleb2 training data.
For a better experience, we encourage you to learn more about
SpeechBrain. The model performance on cnceleb1-test set(Cleaned) is:
Pipeline description
--------------------
This system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss. Speaker Verification is performed using cosine distance between speaker embeddings.
Install SpeechBrain
-------------------
First of all, please install SpeechBrain with the following command:
Please notice that we encourage you to read our tutorials and learn more about
SpeechBrain.
### Compute your speaker embeddings
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify\_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode\_batch* and *classify\_batch*.
### Perform Speaker Verification
The prediction is 1 if the two signals in input are from the same speaker and 0 otherwise.
### Inference on GPU
To perform inference on the GPU, add 'run\_opts={"device":"cuda"}' when calling the 'from\_hparams' method.
### Training
The model was trained with SpeechBrain (aa018540).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
2. Install it:
3. Run Training:
You can find our training results (models, logs, etc) here.
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing ECAPA-TDNN
Citing SpeechBrain
==================
Please, cite SpeechBrain if you use it for your research or business.
About SpeechBrain
=================
* Website: URL
* Code: URL
* HuggingFace: URL
| [
"### Compute your speaker embeddings\n\n\nThe system is trained with recordings sampled at 16kHz (single channel).\nThe code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify\\_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode\\_batch* and *classify\\_batch*.",
"### Perform Speaker Verification\n\n\nThe prediction is 1 if the two signals in input are from the same speaker and 0 otherwise.",
"### Inference on GPU\n\n\nTo perform inference on the GPU, add 'run\\_opts={\"device\":\"cuda\"}' when calling the 'from\\_hparams' method.",
"### Training\n\n\nThe model was trained with SpeechBrain (aa018540).\nTo train it from scratch follows these steps:\n\n\n1. Clone SpeechBrain:\n2. Install it:\n3. Run Training:\n\n\nYou can find our training results (models, logs, etc) here.",
"### Limitations\n\n\nThe SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.",
"#### Referencing ECAPA-TDNN\n\n\nCiting SpeechBrain\n==================\n\n\nPlease, cite SpeechBrain if you use it for your research or business.\n\n\nAbout SpeechBrain\n=================\n\n\n* Website: URL\n* Code: URL\n* HuggingFace: URL"
] | [
"TAGS\n#speechbrain #embeddings #Speaker #Verification #Identification #pytorch #ECAPA #TDNN #zh #dataset-cnceleb #arxiv-2106.04624 #license-apache-2.0 #region-us \n",
"### Compute your speaker embeddings\n\n\nThe system is trained with recordings sampled at 16kHz (single channel).\nThe code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify\\_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode\\_batch* and *classify\\_batch*.",
"### Perform Speaker Verification\n\n\nThe prediction is 1 if the two signals in input are from the same speaker and 0 otherwise.",
"### Inference on GPU\n\n\nTo perform inference on the GPU, add 'run\\_opts={\"device\":\"cuda\"}' when calling the 'from\\_hparams' method.",
"### Training\n\n\nThe model was trained with SpeechBrain (aa018540).\nTo train it from scratch follows these steps:\n\n\n1. Clone SpeechBrain:\n2. Install it:\n3. Run Training:\n\n\nYou can find our training results (models, logs, etc) here.",
"### Limitations\n\n\nThe SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.",
"#### Referencing ECAPA-TDNN\n\n\nCiting SpeechBrain\n==================\n\n\nPlease, cite SpeechBrain if you use it for your research or business.\n\n\nAbout SpeechBrain\n=================\n\n\n* Website: URL\n* Code: URL\n* HuggingFace: URL"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-starter
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the Langame/starter dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0234
## Model description
More information needed
## Intended uses & limitations
More information needed
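The card itself gives no usage details; as a hedged sketch, the model can be queried through the text-generation pipeline (the prompt and generation settings below are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Langame/distilgpt2-starter")
# The model was fine-tuned on conversation starters; the prompt is illustrative.
print(generator("What is the meaning of", max_length=50, num_return_sequences=1))
```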
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 66.67 | 200 | 3.6445 |
| No log | 133.33 | 400 | 4.5703 |
| 1.0101 | 200.0 | 600 | 5.2109 |
| 1.0101 | 266.67 | 800 | 5.5430 |
| 0.0681 | 333.33 | 1000 | 5.7227 |
| 0.0681 | 400.0 | 1200 | 5.8672 |
| 0.0681 | 466.67 | 1400 | 5.9961 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["Langame/starter"], "model-index": [{"name": "distilgpt2-starter", "results": []}]} | Langame/distilgpt2-starter | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:Langame/starter",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #dataset-Langame/starter #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| distilgpt2-starter
==================
This model is a fine-tuned version of distilgpt2 on the Langame/starter dataset.
It achieves the following results on the evaluation set:
* Loss: 6.0234
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 500.0
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.0+cu111
* Datasets 1.18.1
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 500.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #dataset-Langame/starter #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 500.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.11.0"
] |
text-generation | transformers |
# Langame/gpt2-waiting
This fine-tuned model can generate funny waiting messages.
[Langame](https://langa.me) uses these within its platform 😛.
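As a minimal usage sketch (the prompt mirrors the widget example; the sampling settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Langame/gpt2-waiting")
# Prompt taken from the widget example; sampling settings are illustrative.
print(generator("List of funny waiting messages:", max_length=60, do_sample=True, top_p=0.95))
```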
| {"language": ["en"], "license": "mit", "tags": ["text-generation"], "datasets": ["waiting-messages"], "widget": [{"text": "List of funny waiting messages:", "example_title": "Funny waiting messages"}]} | Langame/gpt2-waiting | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"en",
"dataset:waiting-messages",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #en #dataset-waiting-messages #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Langame/gpt2-waiting
This fine-tuned model can generate funny waiting messages.
Langame uses these within its platform .
| [
"# Langame/gpt2-waiting\n\nThis fine-tuned model can generate funny waiting messages.\n\nLangame uses these within its platform ."
] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #en #dataset-waiting-messages #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Langame/gpt2-waiting\n\nThis fine-tuned model can generate funny waiting messages.\n\nLangame uses these within its platform ."
] |
fill-mask | transformers | # Mengzi-BERT base fin model (Chinese)
Continued training of mengzi-bert-base on 20G of financial news and research reports. Masked language modeling (MLM), part-of-speech (POS) tagging and sentence order prediction (SOP) are used as training tasks.
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
## Usage
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("Langboat/mengzi-bert-base-fin")
model = BertModel.from_pretrained("Langboat/mengzi-bert-base-fin")
```
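Because the model is trained with a masked-language-modeling objective, it can also be queried through the fill-mask pipeline; the sentence below is only an illustrative example:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Langboat/mengzi-bert-base-fin")
# Illustrative financial-domain sentence; [MASK] stands for a single character.
print(fill_mask("今年公司的净利润大幅[MASK]长。"))
```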
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["zh"], "license": "apache-2.0"} | Langboat/mengzi-bert-base-fin | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"zh",
"arxiv:2110.06696",
"doi:10.57967/hf/0024",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2110.06696"
] | [
"zh"
] | TAGS
#transformers #pytorch #safetensors #bert #fill-mask #zh #arxiv-2110.06696 #doi-10.57967/hf/0024 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # Mengzi-BERT base fin model (Chinese)
Continue trained mengzi-bert-base with 20G financial news and research reports. Masked language modeling(MLM), part-of-speech(POS) tagging and sentence order prediction(SOP) are used as training task.
Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese
## Usage
If you find the technical report or resource is useful, please cite the following technical report in your paper.
| [
"# Mengzi-BERT base fin model (Chinese)\nContinue trained mengzi-bert-base with 20G financial news and research reports. Masked language modeling(MLM), part-of-speech(POS) tagging and sentence order prediction(SOP) are used as training task.\n\nMengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese",
"## Usage\n\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper."
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #fill-mask #zh #arxiv-2110.06696 #doi-10.57967/hf/0024 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Mengzi-BERT base fin model (Chinese)\nContinue trained mengzi-bert-base with 20G financial news and research reports. Masked language modeling(MLM), part-of-speech(POS) tagging and sentence order prediction(SOP) are used as training task.\n\nMengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese",
"## Usage\n\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper."
] |
fill-mask | transformers |
# Mengzi-BERT base model (Chinese)
Pretrained on a 300G Chinese corpus. Masked language modeling (MLM), part-of-speech (POS) tagging and sentence order prediction (SOP) are used as training tasks.
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
## Usage
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("Langboat/mengzi-bert-base")
model = BertModel.from_pretrained("Langboat/mengzi-bert-base")
```
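The checkpoint can likewise be queried through the fill-mask pipeline; the sentence below is the widget example from this card:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Langboat/mengzi-bert-base")
print(fill_mask("生活的真谛是[MASK]。"))  # widget example from this card
```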
## Scores on nine chinese tasks (without any data augmentation)
| Model | AFQMC | TNEWS | IFLYTEK | CMNLI | WSC | CSL | CMRC2018 | C3 | CHID |
|-|-|-|-|-|-|-|-|-|-|
|RoBERTa-wwm-ext| 74.30 | 57.51 | 60.80 | 80.70 | 67.20 | 80.67 | 77.59 | 67.06 | 83.78 |
|Mengzi-BERT-base| 74.58 | 57.97 | 60.68 | 82.12 | 87.50 | 85.40 | 78.54 | 71.70 | 84.16 |
RoBERTa-wwm-ext scores are taken from the CLUE baseline.
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["zh"], "license": "apache-2.0", "widget": [{"text": "\u751f\u6d3b\u7684\u771f\u8c1b\u662f[MASK]\u3002"}]} | Langboat/mengzi-bert-base | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2110.06696",
"doi:10.57967/hf/0023",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2110.06696"
] | [
"zh"
] | TAGS
#transformers #pytorch #bert #fill-mask #zh #arxiv-2110.06696 #doi-10.57967/hf/0023 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| Mengzi-BERT base model (Chinese)
================================
Pretrained model on 300G Chinese corpus. Masked language modeling(MLM), part-of-speech(POS) tagging and sentence order prediction(SOP) are used as training task.
Mengzi: A lightweight yet Powerful Chinese Pre-trained Language Model
Usage
-----
Scores on nine chinese tasks (without any data augmentation)
------------------------------------------------------------
RoBERTa-wwm-ext scores are from CLUE baseline
If you find the technical report or resource is useful, please cite the following technical report in your paper.
| [] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #zh #arxiv-2110.06696 #doi-10.57967/hf/0023 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
fill-mask | transformers |
# Mengzi-oscar-base-caption (Chinese Multi-modal Image Caption model)
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
Mengzi-oscar-base-caption is fine-tuned from the Chinese multi-modal pre-training model [Mengzi-Oscar](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) on the AIC-ICC Chinese image caption dataset.
## Usage
#### Installation
Check [INSTALL.md](https://github.com/microsoft/Oscar/blob/master/INSTALL.md) for installation instructions.
#### Pretrain & fine-tune
See the [Mengzi-Oscar.md](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) for details.
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["zh"], "license": "apache-2.0"} | Langboat/mengzi-oscar-base-caption | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2110.06696",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2110.06696"
] | [
"zh"
] | TAGS
#transformers #pytorch #bert #fill-mask #zh #arxiv-2110.06696 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Mengzi-oscar-base-caption (Chinese Multi-modal Image Caption model)
Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese
Mengzi-oscar-base-caption is fine-tuned based on Chinese multi-modal pre-training model Mengzi-Oscar, on AIC-ICC Chinese image caption dataset.
## Usage
#### Installation
Check URL for installation instructions.
#### Pretrain & fine-tune
See the URL for details.
If you find the technical report or resource is useful, please cite the following technical report in your paper.
| [
"# Mengzi-oscar-base-caption (Chinese Multi-modal Image Caption model)\n\nMengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese\n\nMengzi-oscar-base-caption is fine-tuned based on Chinese multi-modal pre-training model Mengzi-Oscar, on AIC-ICC Chinese image caption dataset.",
"## Usage",
"#### Installation\nCheck URL for installation instructions.",
"#### Pretrain & fine-tune\nSee the URL for details.\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper."
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #zh #arxiv-2110.06696 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Mengzi-oscar-base-caption (Chinese Multi-modal Image Caption model)\n\nMengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese\n\nMengzi-oscar-base-caption is fine-tuned based on Chinese multi-modal pre-training model Mengzi-Oscar, on AIC-ICC Chinese image caption dataset.",
"## Usage",
"#### Installation\nCheck URL for installation instructions.",
"#### Pretrain & fine-tune\nSee the URL for details.\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper."
] |
fill-mask | transformers | # Mengzi-oscar-base-retrieval (Chinese Image-text retrieval model)
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
Mengzi-oscar-base-retrieval is fine-tuned from the Chinese multi-modal pre-training model [Mengzi-Oscar](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) on the COCO-ir dataset.
## Usage
#### Installation
Check [INSTALL.md](https://github.com/microsoft/Oscar/blob/master/INSTALL.md) for installation instructions.
#### Pretrain & fine-tune
See the [Mengzi-Oscar.md](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) for details.
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["zh"], "license": "apache-2.0"} | Langboat/mengzi-oscar-base-retrieval | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2110.06696",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2110.06696"
] | [
"zh"
] | TAGS
#transformers #pytorch #bert #fill-mask #zh #arxiv-2110.06696 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| # Mengzi-oscar-base-retrieval (Chinese Image-text retrieval model)
Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese
Mengzi-oscar-base-retrieval is fine-tuned based on Chinese multi-modal pre-training model Mengzi-Oscar, on COCO-ir dataset.
## Usage
#### Installation
Check URL for installation instructions.
#### Pretrain & fine-tune
See the URL for details.
If you find the technical report or resource is useful, please cite the following technical report in your paper.
| [
"# Mengzi-oscar-base-retrieval (Chinese Image-text retrieval model)\n\nMengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese\n\nMengzi-oscar-base-retrieval is fine-tuned based on Chinese multi-modal pre-training model Mengzi-Oscar, on COCO-ir dataset.",
"## Usage",
"#### Installation\nCheck URL for installation instructions.",
"#### Pretrain & fine-tune\nSee the URL for details.\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper."
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #zh #arxiv-2110.06696 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Mengzi-oscar-base-retrieval (Chinese Image-text retrieval model)\n\nMengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese\n\nMengzi-oscar-base-retrieval is fine-tuned based on Chinese multi-modal pre-training model Mengzi-Oscar, on COCO-ir dataset.",
"## Usage",
"#### Installation\nCheck URL for installation instructions.",
"#### Pretrain & fine-tune\nSee the URL for details.\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper."
] |
fill-mask | transformers |
# Mengzi-oscar-base (Chinese Multi-modal pre-training model)
Mengzi-oscar is trained on top of the multi-modal pre-training model [Oscar](https://github.com/microsoft/Oscar) and is initialized with [Mengzi-Bert-Base](https://github.com/Langboat/Mengzi). 3.7M image-text pairs were used, including 0.7M Chinese image-caption pairs and 3M Chinese image-question pairs, covering 0.22M distinct images in total.
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
## Usage
#### Installation
Check [INSTALL.md](https://github.com/microsoft/Oscar/blob/master/INSTALL.md) for installation instructions.
#### Pretrain & fine-tune
See the [Mengzi-Oscar.md](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) for details.
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["zh"], "license": "apache-2.0"} | Langboat/mengzi-oscar-base | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2110.06696",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2110.06696"
] | [
"zh"
] | TAGS
#transformers #pytorch #bert #fill-mask #zh #arxiv-2110.06696 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Mengzi-oscar-base (Chinese Multi-modal pre-training model)
Mengzi-oscar is trained based on the Multi-modal pre-training model Oscar, and is initialized using Mengzi-Bert-Base. 3.7M pairs of images and texts were used, including 0.7M Chinese image-caption pairs, 3M Chinese image-question pairs, a total of 0.22M different images.
Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese
## Usage
#### Installation
Check URL for installation instructions.
#### Pretrain & fine-tune
See the URL for details.
If you find the technical report or resource is useful, please cite the following technical report in your paper.
| [
"# Mengzi-oscar-base (Chinese Multi-modal pre-training model)\nMengzi-oscar is trained based on the Multi-modal pre-training model Oscar, and is initialized using Mengzi-Bert-Base. 3.7M pairs of images and texts were used, including 0.7M Chinese image-caption pairs, 3M Chinese image-question pairs, a total of 0.22M different images.\n\nMengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese",
"## Usage",
"#### Installation\nCheck URL for installation instructions.",
"#### Pretrain & fine-tune\nSee the URL for details.\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper."
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #zh #arxiv-2110.06696 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Mengzi-oscar-base (Chinese Multi-modal pre-training model)\nMengzi-oscar is trained based on the Multi-modal pre-training model Oscar, and is initialized using Mengzi-Bert-Base. 3.7M pairs of images and texts were used, including 0.7M Chinese image-caption pairs, 3M Chinese image-question pairs, a total of 0.22M different images.\n\nMengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese",
"## Usage",
"#### Installation\nCheck URL for installation instructions.",
"#### Pretrain & fine-tune\nSee the URL for details.\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper."
] |
text2text-generation | transformers |
# Mengzi-T5 model (Chinese)
Pretrained on a 300G Chinese corpus.
[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Langboat/mengzi-t5-base")
model = T5ForConditionalGeneration.from_pretrained("Langboat/mengzi-t5-base")
```
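As a sketch of the generation API only (the pretrained checkpoint is normally fine-tuned before use, and whether it relies on T5-style `<extra_id_*>` sentinels is an assumption):

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("Langboat/mengzi-t5-base")
model = T5ForConditionalGeneration.from_pretrained("Langboat/mengzi-t5-base")

# Span-infilling prompt in T5 style; the use of <extra_id_*> sentinels is an assumption.
inputs = tokenizer("中国的首都位于<extra_id_0>。", return_tensors="pt")
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```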
## Citation
If you find the technical report or resource is useful, please cite the following technical report in your paper.
```
@misc{zhang2021mengzi,
title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
year={2021},
eprint={2110.06696},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": ["zh"], "license": "apache-2.0"} | Langboat/mengzi-t5-base | null | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"zh",
"arxiv:2110.06696",
"doi:10.57967/hf/0025",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2110.06696"
] | [
"zh"
] | TAGS
#transformers #pytorch #safetensors #t5 #text2text-generation #zh #arxiv-2110.06696 #doi-10.57967/hf/0025 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Mengzi-T5 model (Chinese)
Pretrained model on 300G Chinese corpus.
Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese
## Usage
If you find the technical report or resource is useful, please cite the following technical report in your paper.
| [
"# Mengzi-T5 model (Chinese)\nPretrained model on 300G Chinese corpus. \n\nMengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese",
"## Usage\n\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper."
] | [
"TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #zh #arxiv-2110.06696 #doi-10.57967/hf/0025 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Mengzi-T5 model (Chinese)\nPretrained model on 300G Chinese corpus. \n\nMengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese",
"## Usage\n\n\nIf you find the technical report or resource is useful, please cite the following technical report in your paper."
] |
text-generation | transformers |
# Gandalf DialoGPT Model | {"tags": ["conversational"]} | Laptop/DialoGPT-small-gandalf | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Gandalf DialoGPT Model | [
"# Gandalf DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Gandalf DialoGPT Model"
] |
token-classification | transformers |
## DeFormer
DeFormer is a model trained to distinguish between `de` and `dem` in Swedish sentences. The model can be tested directly in the panels to the right under **Hosted Inference API** by entering a sentence and pressing **Compute**.
**Update 2023-05-06:** The model can now also handle dropped t's in de**t**. The new version has been trained to distinguish between de, det and dem, as well as enda and ända.
**Instructions:**
Only use de/dem/enda/ända in lower case when testing. When the model was trained, all occurrences of "De" and "Dem" were converted to lower case.
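As a minimal usage sketch with the token-classification pipeline (the sentence is taken from the widget examples of this card):

```python
from transformers import pipeline

tagger = pipeline("token-classification", model="Lauler/deformer")
# Sentence from the widget examples; tokens are labelled as background ("ord")
# or as one of the de/dem/det/enda/ända classes.
print(tagger("dem har sökt upp de för att prata."))
```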
## Training data
DeFormer has been trained on sentences from the European Parliament and Swedish-language Wikimedia. These were downloaded from [OPUS](https://opus.nlpl.eu/). The sources were chosen because they were assumed to use correct language.
Only sentences containing `de`, `dem`, `det`, `enda` or `ända` were kept when constructing the training dataset. The table below contains descriptive statistics on the number of sentences kept from each dataset, together with the frequencies of each word.
| Data source | Sentences/documents | # De | # Dem | # Det | # Enda | # Ända |
| ----------- | ----------- | ----------- | ----------- | -------------|---------- | --------- |
| [Europaparl sv.txt.gz](https://opus.nlpl.eu/download.php?f=Europarl/v8/mono/sv.txt.gz) | 1150556 | 461305 | 53726 | 824065 | 15553 | 1781 |
| [JRC-Acquis raw.sv.gz](https://opus.nlpl.eu/download.php?f=JRC-Acquis/mono/JRC-Acquis.raw.sv.gz) | 648387 | 399628 | 16539 | 326925 | 5975 | 267 |
| [Wikimedia sv.txt.gz](https://opus.nlpl.eu/download.php?f=wikimedia/v20210402/mono/sv.txt.gz) | 1615505 | 598371 | 38649 | 594038 | 24805 | 7063 |
| [Riksdagens anföranden](https://data.riksdagen.se/data/anforanden/) | 671031 | 497515 | 118069 | 659051 | 25912 | 4917 |
| [Riksdagens motioner (2014-2022)](https://data.riksdagen.se/data/dokument/) | 85124 | 85124 | 11773 | 104526 | 2740 | 453 |
| [SweDN (Superlim 2)](https://spraakbanken.gu.se/en/resources/swedn) | 93026 | 70254 | 16399 | 88087 | 5104 | 1236 |
| **Total** | **4286974** | **2112197** | **255155** | **2596692** | **80089** | **15717** |
When training DeFormer, random substitutions were introduced in which the words above were replaced by the forms they are commonly confused with. The model was then challenged to classify whether a given word belongs to one of the following categories
1. **`ord`** (all background words that are not de/dem belong to this category)
2. **`DE`**
3. **`DEM`**
4. **`DET`**
5. **`ENDA`**
6. **`ÄNDA`**
Before the observations were passed to model training, `de` was replaced by `det` or `dem` with roughly 50 percent probability, while `dem` was changed to `de` in 40 percent of the cases. Similar substitutions were made between `enda` and `ända`.
## Accuracy
DeFormer was evaluated on a validation set of 31200 sentences from the same data sources (Swedish wiki + European Parliament + JRC) that the model was trained on. Random errors were introduced to challenge the model: 47 percent of the occurrences of `de` in the original sentences were changed to `dem`, while 40 percent of the occurrences of `dem` were changed to `de`. The table below shows that DeFormer is very accurate. The few "incorrect" predictions that the model outputs are almost all `de/dem som` constructions with subordinate clauses. The majority of these should not really be considered errors, since [both forms are accepted](https://www4.isof.se/cgi-bin/srfl/visasvar.py?sok=dem%20som&svar=79718&log_id=705355).
**NOTE:** The table below applies to the older version of DeFormer, which only distinguished between `de` and `dem`.
| | Accuracy |
| ----------- | ----------- |
| de | 99.9\% |
| dem | 98.6\% | | {"widget": [{"text": "dem har s\u00f6kt upp de f\u00f6r att prata.", "example_title": "de/dem exempel 1"}, {"text": "Jag s\u00e5g de komma runt h\u00f6rnet och g\u00e5 i riktning mot dem byggnaderna.", "example_title": "de/dem exempel 2"}, {"text": "de \u00e4r ganska tr\u00e5kigt att de blivit s\u00e5h\u00e4r, men de va de \u00e4nda jag kunde g\u00f6ra", "example_title": "enda/\u00e4nda och de(t)"}]} | Lauler/deformer | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"doi:10.57967/hf/0612",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #token-classification #doi-10.57967/hf/0612 #autotrain_compatible #endpoints_compatible #region-us
| DeFormer
--------
DeFormer is a model trained to distinguish between 'de' and 'dem' in Swedish sentences. The model can be tested directly in the panels to the right under Hosted Inference API by entering a sentence and pressing Compute.
Update 2023-05-06: The model can now also handle dropped t's in det. The new version has been trained to distinguish between de, det and dem, as well as enda and ända.
Instructions:
Only use de/dem/enda/ända in lower case when testing. When the model was trained, all occurrences of "De" and "Dem" were converted to lower case.
Training data
-------------
DeFormer has been trained on sentences from the European Parliament and Swedish-language Wikimedia. These were downloaded from OPUS. The sources were chosen because they were assumed to use correct language.
Only sentences containing 'de', 'dem', 'det', 'enda' or 'ända' were kept when constructing the training dataset. The table below contains descriptive statistics on the number of sentences kept from each dataset, together with the frequencies of each word.
When training DeFormer, random substitutions were introduced in which the words above were replaced by the forms they are commonly confused with. The model was then challenged to classify whether a given word belongs to one of the following categories
1. 'ord' (all background words that are not de/dem belong to this category)
2. 'DE'
3. 'DEM'
4. 'DET'
5. 'ENDA'
6. 'ÄNDA'
Before the observations were passed to model training, 'de' was replaced by 'det' or 'dem' with roughly 50 percent probability, while 'dem' was changed to 'de' in 40 percent of the cases. Similar substitutions were made between 'enda' and 'ända'.
Accuracy
--------
DeFormer was evaluated on a validation set of 31200 sentences from the same data sources (Swedish wiki + European Parliament + JRC) that the model was trained on. Random errors were introduced to challenge the model: 47 percent of the occurrences of 'de' in the original sentences were changed to 'dem', while 40 percent of the occurrences of 'dem' were changed to 'de'. The table below shows that DeFormer is very accurate. The few "incorrect" predictions that the model outputs are almost all 'de/dem som' constructions with subordinate clauses. The majority of these should not really be considered errors, since both forms are accepted.
NOTE: The table below applies to the older version of DeFormer, which only distinguished between 'de' and 'dem'.
| [] | [
"TAGS\n#transformers #pytorch #bert #token-classification #doi-10.57967/hf/0612 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3793
- Accuracy: 0.8404
## Model description
More information needed
## Intended uses & limitations
More information needed
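The card itself gives no usage details; as a hedged sketch, the fine-tuned checkpoint can be queried through the text-classification pipeline (the example review is illustrative):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Lazaro97/results")
# Illustrative Spanish review; the model was fine-tuned on amazon_reviews_multi (es).
print(classifier("El producto llegó a tiempo y funciona perfectamente."))
```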
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3542 | 1.0 | 125 | 0.3611 | 0.839 |
| 0.2255 | 2.0 | 250 | 0.3793 | 0.8404 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "metrics": ["accuracy"], "model-index": [{"name": "results", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "amazon_reviews_multi", "type": "amazon_reviews_multi", "args": "es"}, "metrics": [{"type": "accuracy", "value": 0.8404, "name": "Accuracy"}]}]}]} | Lazaro97/results | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| results
=======
This model is a fine-tuned version of BSC-TeMU/roberta-base-bne on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3793
* Accuracy: 0.8404
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.12.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
feature-extraction | transformers |
# LeBenchmark: wav2vec2 base model trained on 1K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes in two versions; the later version (LeBenchmark 2.0) extends the first in terms of both the number of pre-trained SSL models and the number of downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech](https://arxiv.org/abs/2309.05472)
## Model and data descriptions
We release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:
## *Lebenchmark 2.0:*
- [wav2vec2-FR-14K-xlarge](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-xlarge): xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-large): Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-light](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-light): Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *Lebenchmark:*
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
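As a sketch of plain feature extraction with the Transformers API (this assumes the checkpoint ships a preprocessor config; otherwise a *Wav2Vec2FeatureExtractor* has to be constructed manually, and the audio file name is illustrative):

```python
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "LeBenchmark/wav2vec2-FR-1K-base"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)

speech, sr = torchaudio.load("audio_fr.wav")  # illustrative file; 16 kHz mono expected
inputs = feature_extractor(speech.squeeze(0).numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # [batch, frames, hidden_size]
```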
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used in the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
## Referencing LeBenchmark
```
@misc{parcollet2023lebenchmark,
title={LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech},
author={Titouan Parcollet and Ha Nguyen and Solene Evain and Marcely Zanon Boito and Adrien Pupier and Salima Mdhaffar and Hang Le and Sina Alisamir and Natalia Tomashenko and Marco Dinarelli and Shucong Zhang and Alexandre Allauzen and Maximin Coavoux and Yannick Esteve and Mickael Rouvier and Jerome Goulian and Benjamin Lecouteux and Francois Portet and Solange Rossato and Fabien Ringeval and Didier Schwab and Laurent Besacier},
year={2023},
eprint={2309.05472},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": "fr", "license": "apache-2.0", "tags": ["wav2vec2"]} | LeBenchmark/wav2vec2-FR-1K-base | null | [
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"fr",
"arxiv:2309.05472",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2309.05472"
] | [
"fr"
] | TAGS
#transformers #pytorch #wav2vec2 #feature-extraction #fr #arxiv-2309.05472 #license-apache-2.0 #endpoints_compatible #region-us
|
# LeBenchmark: wav2vec2 base model trained on 1K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech
## Model and data descriptions
We release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:
## *Lebenchmark 2.0:*
- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *Lebenchmark:*
- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).
- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
If interested, simply follow this tutorial
## Referencing LeBenchmark
| [
"# LeBenchmark: wav2vec2 base model trained on 1K hours of French speech\n\n \n\nLeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.\nFor more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech",
"## Model and data descriptions\n\n \nWe release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:",
"## *Lebenchmark 2.0:*\n- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).",
"## *Lebenchmark:*\n- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).\n- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).\n- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).",
"## Intended uses & limitations\n\nPretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.",
"## Fine-tune with Fairseq for ASR with CTC\n\nAs our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.\n\nPlease note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.",
"## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...\n\nPretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.\n\nWhile it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!\n\n 1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...\n 2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.\n\nIf interested, simply follow this tutorial",
"## Referencing LeBenchmark"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #feature-extraction #fr #arxiv-2309.05472 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# LeBenchmark: wav2vec2 base model trained on 1K hours of French speech\n\n \n\nLeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.\nFor more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech",
"## Model and data descriptions\n\n \nWe release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:",
"## *Lebenchmark 2.0:*\n- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).",
"## *Lebenchmark:*\n- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).\n- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).\n- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).",
"## Intended uses & limitations\n\nPretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.",
"## Fine-tune with Fairseq for ASR with CTC\n\nAs our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.\n\nPlease note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.",
"## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...\n\nPretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.\n\nWhile it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!\n\n 1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...\n 2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.\n\nIf interested, simply follow this tutorial",
"## Referencing LeBenchmark"
] |
feature-extraction | transformers |
# LeBenchmark: wav2vec2 large model trained on 1K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes in two versions; the later version (LeBenchmark 2.0) extends the first in terms of both the number of pre-trained SSL models and the number of downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech](https://arxiv.org/abs/2309.05472)
## Model and data descriptions
We release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:
## *Lebenchmark 2.0:*
- [wav2vec2-FR-14K-xlarge](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-xlarge): xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-large): Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-light](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-light): Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *Lebenchmark:*
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can then be used in the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models have recently gained in popularity. At the same time, the [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
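Beyond the Fairseq and SpeechBrain routes above, the checkpoint can also be probed directly with the HuggingFace `transformers` library for frame-level feature extraction. The snippet below is only a minimal sketch, not an official recipe: the zero waveform is a placeholder for a real 16 kHz French recording, and it assumes the repository exposes a standard `Wav2Vec2Model` configuration (as the feature-extraction tag suggests).

```python
import torch
from transformers import Wav2Vec2Model

# Load the pretrained encoder (no CTC head, feature extraction only).
model = Wav2Vec2Model.from_pretrained("LeBenchmark/wav2vec2-FR-1K-large")
model.eval()

# Placeholder input: one second of silence at 16 kHz.
# Replace with a real waveform as a float32 tensor of shape (batch, samples).
input_values = torch.zeros(1, 16000)

with torch.no_grad():
    # last_hidden_state has shape (batch, frames, hidden_size).
    features = model(input_values).last_hidden_state

print(features.shape)
```

These frame-level representations can then be fed to any downstream head (CTC, speaker embedding, source separation, ...), mirroring the frozen-encoder option described above.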
## Referencing LeBenchmark
```
@misc{parcollet2023lebenchmark,
title={LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech},
author={Titouan Parcollet and Ha Nguyen and Solene Evain and Marcely Zanon Boito and Adrien Pupier and Salima Mdhaffar and Hang Le and Sina Alisamir and Natalia Tomashenko and Marco Dinarelli and Shucong Zhang and Alexandre Allauzen and Maximin Coavoux and Yannick Esteve and Mickael Rouvier and Jerome Goulian and Benjamin Lecouteux and Francois Portet and Solange Rossato and Fabien Ringeval and Didier Schwab and Laurent Besacier},
year={2023},
eprint={2309.05472},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": "fr", "license": "apache-2.0", "tags": ["wav2vec2"]} | LeBenchmark/wav2vec2-FR-1K-large | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"feature-extraction",
"fr",
"arxiv:2309.05472",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2309.05472"
] | [
"fr"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #feature-extraction #fr #arxiv-2309.05472 #license-apache-2.0 #endpoints_compatible #region-us
|
# LeBenchmark: wav2vec2 large model trained on 1K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech
## Model and data descriptions
We release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:
## *Lebenchmark 2.0:*
- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *Lebenchmark:*
- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).
- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can then be used in the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
If interested, simply follow this tutorial
## Referencing LeBenchmark
| [
"# LeBenchmark: wav2vec2 large model trained on 1K hours of French speech\n\n \n\nLeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.\nFor more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech",
"## Model and data descriptions\n\n \nWe release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:",
"## *Lebenchmark 2.0:*\n- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).",
"## *Lebenchmark:*\n- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).\n- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).\n- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).",
"## Intended uses & limitations\n\nPretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.",
"## Fine-tune with Fairseq for ASR with CTC\n\nAs our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.\n\nPlease note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.",
"## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...\n\nPretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.\n\nWhile it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!\n\n 1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...\n 2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.\n\nIf interested, simply follow this tutorial",
"## Referencing LeBenchmark"
] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #feature-extraction #fr #arxiv-2309.05472 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# LeBenchmark: wav2vec2 large model trained on 1K hours of French speech\n\n \n\nLeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.\nFor more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech",
"## Model and data descriptions\n\n \nWe release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:",
"## *Lebenchmark 2.0:*\n- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).",
"## *Lebenchmark:*\n- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).\n- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).\n- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).",
"## Intended uses & limitations\n\nPretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.",
"## Fine-tune with Fairseq for ASR with CTC\n\nAs our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.\n\nPlease note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.",
"## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...\n\nPretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.\n\nWhile it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!\n\n 1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...\n 2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.\n\nIf interested, simply follow this tutorial",
"## Referencing LeBenchmark"
] |
feature-extraction | transformers |
# LeBenchmark: wav2vec2 base model trained on 2.6K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes in two versions, in which the later version (LeBenchmark 2.0) extends the first in both the number of pre-trained SSL models and the number of downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech](https://arxiv.org/abs/2309.05472)
## Model and data descriptions
We release several models that can be found under our HuggingFace organization: four different wav2vec2 architectures (*Light*, *Base*, *Large* and *xLarge*) coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpora. In short:
## *Lebenchmark 2.0:*
- [wav2vec2-FR-14K-xlarge](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-xlarge): xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-large): Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-light](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-light): Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *Lebenchmark:*
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can then be used in the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models have recently gained in popularity. At the same time, the [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
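Beyond the Fairseq and SpeechBrain routes above, the checkpoint can also be probed directly with the HuggingFace `transformers` library for frame-level feature extraction. The snippet below is only a minimal sketch, not an official recipe: the zero waveform is a placeholder for a real 16 kHz French recording, and it assumes the repository exposes a standard `Wav2Vec2Model` configuration (as the feature-extraction tag suggests).

```python
import torch
from transformers import Wav2Vec2Model

# Load the pretrained encoder (no CTC head, feature extraction only).
model = Wav2Vec2Model.from_pretrained("LeBenchmark/wav2vec2-FR-2.6K-base")
model.eval()

# Placeholder input: one second of silence at 16 kHz.
# Replace with a real waveform as a float32 tensor of shape (batch, samples).
input_values = torch.zeros(1, 16000)

with torch.no_grad():
    # last_hidden_state has shape (batch, frames, hidden_size).
    features = model(input_values).last_hidden_state

print(features.shape)
```

These frame-level representations can then be fed to any downstream head (CTC, speaker embedding, source separation, ...), mirroring the frozen-encoder option described above.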
## Referencing LeBenchmark
```
@misc{parcollet2023lebenchmark,
title={LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech},
author={Titouan Parcollet and Ha Nguyen and Solene Evain and Marcely Zanon Boito and Adrien Pupier and Salima Mdhaffar and Hang Le and Sina Alisamir and Natalia Tomashenko and Marco Dinarelli and Shucong Zhang and Alexandre Allauzen and Maximin Coavoux and Yannick Esteve and Mickael Rouvier and Jerome Goulian and Benjamin Lecouteux and Francois Portet and Solange Rossato and Fabien Ringeval and Didier Schwab and Laurent Besacier},
year={2023},
eprint={2309.05472},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": "fr", "license": "apache-2.0", "tags": ["wav2vec2"]} | LeBenchmark/wav2vec2-FR-2.6K-base | null | [
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"fr",
"arxiv:2309.05472",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2309.05472"
] | [
"fr"
] | TAGS
#transformers #pytorch #wav2vec2 #feature-extraction #fr #arxiv-2309.05472 #license-apache-2.0 #endpoints_compatible #region-us
|
# LeBenchmark: wav2vec2 base model trained on 2.6K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech
## Model and data descriptions
We release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:
## *Lebenchmark 2.0:*
- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *Lebenchmark:*
- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).
- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
If interested, simply follow this tutorial
## Referencing LeBenchmark
| [
"# LeBenchmark: wav2vec2 base model trained on 2.6K hours of French speech\n\n \n\nLeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.\nFor more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech",
"## Model and data descriptions\n\n \nWe release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:",
"## *Lebenchmark 2.0:*\n- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).",
"## *Lebenchmark:*\n- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).\n- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).\n- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).",
"## Intended uses & limitations\n\nPretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.",
"## Fine-tune with Fairseq for ASR with CTC\n\nAs our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.\n\nPlease note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.",
"## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...\n\nPretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.\n\nWhile it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!\n\n 1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...\n 2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.\n\nIf interested, simply follow this tutorial",
"## Referencing LeBenchmark"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #feature-extraction #fr #arxiv-2309.05472 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# LeBenchmark: wav2vec2 base model trained on 2.6K hours of French speech\n\n \n\nLeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.\nFor more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech",
"## Model and data descriptions\n\n \nWe release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:",
"## *Lebenchmark 2.0:*\n- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).",
"## *Lebenchmark:*\n- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).\n- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).\n- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).",
"## Intended uses & limitations\n\nPretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.",
"## Fine-tune with Fairseq for ASR with CTC\n\nAs our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.\n\nPlease note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.",
"## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...\n\nPretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.\n\nWhile it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!\n\n 1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...\n 2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.\n\nIf interested, simply follow this tutorial",
"## Referencing LeBenchmark"
] |
feature-extraction | transformers |
# LeBenchmark: wav2vec2 base model trained on 3K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes in two versions, in which the later version (LeBenchmark 2.0) extends the first in both the number of pre-trained SSL models and the number of downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech](https://arxiv.org/abs/2309.05472)
## Model and data descriptions
We release several models that can be found under our HuggingFace organization: four different wav2vec2 architectures (*Light*, *Base*, *Large* and *xLarge*) coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpora. In short:
## *Lebenchmark 2.0:*
- [wav2vec2-FR-14K-xlarge](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-xlarge): xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-large): Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-light](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-light): Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *Lebenchmark:*
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can then be used in the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models have recently gained in popularity. At the same time, the [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
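Beyond the Fairseq and SpeechBrain routes above, the checkpoint can also be probed directly with the HuggingFace `transformers` library for frame-level feature extraction. The snippet below is only a minimal sketch, not an official recipe: the zero waveform is a placeholder for a real 16 kHz French recording, and it assumes the repository exposes a standard `Wav2Vec2Model` configuration (as the feature-extraction tag suggests).

```python
import torch
from transformers import Wav2Vec2Model

# Load the pretrained encoder (no CTC head, feature extraction only).
model = Wav2Vec2Model.from_pretrained("LeBenchmark/wav2vec2-FR-3K-base")
model.eval()

# Placeholder input: one second of silence at 16 kHz.
# Replace with a real waveform as a float32 tensor of shape (batch, samples).
input_values = torch.zeros(1, 16000)

with torch.no_grad():
    # last_hidden_state has shape (batch, frames, hidden_size).
    features = model(input_values).last_hidden_state

print(features.shape)
```

These frame-level representations can then be fed to any downstream head (CTC, speaker embedding, source separation, ...), mirroring the frozen-encoder option described above.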
## Referencing LeBenchmark
```
@misc{parcollet2023lebenchmark,
title={LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech},
author={Titouan Parcollet and Ha Nguyen and Solene Evain and Marcely Zanon Boito and Adrien Pupier and Salima Mdhaffar and Hang Le and Sina Alisamir and Natalia Tomashenko and Marco Dinarelli and Shucong Zhang and Alexandre Allauzen and Maximin Coavoux and Yannick Esteve and Mickael Rouvier and Jerome Goulian and Benjamin Lecouteux and Francois Portet and Solange Rossato and Fabien Ringeval and Didier Schwab and Laurent Besacier},
year={2023},
eprint={2309.05472},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": "fr", "license": "apache-2.0", "tags": ["wav2vec2"]} | LeBenchmark/wav2vec2-FR-3K-base | null | [
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"fr",
"arxiv:2309.05472",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2309.05472"
] | [
"fr"
] | TAGS
#transformers #pytorch #wav2vec2 #feature-extraction #fr #arxiv-2309.05472 #license-apache-2.0 #endpoints_compatible #region-us
|
# LeBenchmark: wav2vec2 base model trained on 3K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech
## Model and data descriptions
We release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:
## *Lebenchmark 2.0:*
- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *Lebenchmark:*
- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).
- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can then be used in the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
If interested, simply follow this tutorial
## Referencing LeBenchmark
| [
"# LeBenchmark: wav2vec2 base model trained on 3K hours of French speech\n\n \n\nLeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.\nFor more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech",
"## Model and data descriptions\n\n \nWe release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:",
"## *Lebenchmark 2.0:*\n- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).",
"## *Lebenchmark:*\n- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).\n- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).\n- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).",
"## Intended uses & limitations\n\nPretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.",
"## Fine-tune with Fairseq for ASR with CTC\n\nAs our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.\n\nPlease note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.",
"## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...\n\nPretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.\n\nWhile it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!\n\n 1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...\n 2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.\n\nIf interested, simply follow this tutorial",
"## Referencing LeBenchmark"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #feature-extraction #fr #arxiv-2309.05472 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# LeBenchmark: wav2vec2 base model trained on 3K hours of French speech\n\n \n\nLeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.\nFor more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech",
"## Model and data descriptions\n\n \nWe release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:",
"## *Lebenchmark 2.0:*\n- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).",
"## *Lebenchmark:*\n- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).\n- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).\n- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).",
"## Intended uses & limitations\n\nPretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.",
"## Fine-tune with Fairseq for ASR with CTC\n\nAs our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.\n\nPlease note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.",
"## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...\n\nPretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.\n\nWhile it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!\n\n 1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...\n 2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.\n\nIf interested, simply follow this tutorial",
"## Referencing LeBenchmark"
] |
feature-extraction | transformers |
# LeBenchmark: wav2vec2 large model trained on 3K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes in two versions, in which the later version (LeBenchmark 2.0) extends the first in both the number of pre-trained SSL models and the number of downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech](https://arxiv.org/abs/2309.05472)
## Model and data descriptions
We release several models that can be found under our HuggingFace organization: four different wav2vec2 architectures (*Light*, *Base*, *Large* and *xLarge*) coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpora. In short:
## *Lebenchmark 2.0:*
- [wav2vec2-FR-14K-xlarge](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-xlarge): xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-large): Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-light](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-light): Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *Lebenchmark:*
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used in the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e., our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
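Option 1 above amounts to freezing the wav2vec2 encoder and training only a lightweight head on top of its features. The snippet below illustrates that idea with plain PyTorch and `transformers` rather than with SpeechBrain itself; the two-class head, the mean pooling, and the dummy one-second input are arbitrary choices made for the example.

```python
import torch
from torch import nn
from transformers import Wav2Vec2Model

encoder = Wav2Vec2Model.from_pretrained("LeBenchmark/wav2vec2-FR-3K-large")
encoder.eval()
for param in encoder.parameters():   # frozen encoder: no gradients reach wav2vec2
    param.requires_grad = False

head = nn.Linear(encoder.config.hidden_size, 2)  # hypothetical 2-class downstream task

def classify(input_values: torch.Tensor) -> torch.Tensor:
    """input_values: (batch, samples) of 16 kHz audio."""
    with torch.no_grad():
        frames = encoder(input_values).last_hidden_state  # (batch, frames, hidden)
    pooled = frames.mean(dim=1)                           # average pooling over time
    return head(pooled)                                   # only the head is trainable

logits = classify(torch.randn(1, 16_000))  # one second of dummy audio
print(logits.shape)                        # torch.Size([1, 2])
```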
## Referencing LeBenchmark
```
@misc{parcollet2023lebenchmark,
title={LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech},
author={Titouan Parcollet and Ha Nguyen and Solene Evain and Marcely Zanon Boito and Adrien Pupier and Salima Mdhaffar and Hang Le and Sina Alisamir and Natalia Tomashenko and Marco Dinarelli and Shucong Zhang and Alexandre Allauzen and Maximin Coavoux and Yannick Esteve and Mickael Rouvier and Jerome Goulian and Benjamin Lecouteux and Francois Portet and Solange Rossato and Fabien Ringeval and Didier Schwab and Laurent Besacier},
year={2023},
eprint={2309.05472},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": "fr", "license": "apache-2.0", "tags": ["wav2vec2"]} | LeBenchmark/wav2vec2-FR-3K-large | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"feature-extraction",
"fr",
"arxiv:2309.05472",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2309.05472"
] | [
"fr"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #feature-extraction #fr #arxiv-2309.05472 #license-apache-2.0 #endpoints_compatible #region-us
|
# LeBenchmark: wav2vec2 large model trained on 3K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech
## Model and data descriptions
We release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:
## *Lebenchmark 2.0:*
- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *Lebenchmark:*
- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).
- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
If interested, simply follow this tutorial
## Referencing LeBenchmark
| [
"# LeBenchmark: wav2vec2 large model trained on 3K hours of French speech\n\n \n\nLeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.\nFor more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech",
"## Model and data descriptions\n\n \nWe release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:",
"## *Lebenchmark 2.0:*\n- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).",
"## *Lebenchmark:*\n- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).\n- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).\n- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).",
"## Intended uses & limitations\n\nPretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.",
"## Fine-tune with Fairseq for ASR with CTC\n\nAs our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.\n\nPlease note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.",
"## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...\n\nPretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.\n\nWhile it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!\n\n 1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...\n 2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.\n\nIf interested, simply follow this tutorial",
"## Referencing LeBenchmark"
] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #feature-extraction #fr #arxiv-2309.05472 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# LeBenchmark: wav2vec2 large model trained on 3K hours of French speech\n\n \n\nLeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.\nFor more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech",
"## Model and data descriptions\n\n \nWe release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:",
"## *Lebenchmark 2.0:*\n- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).",
"## *Lebenchmark:*\n- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).\n- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).\n- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).",
"## Intended uses & limitations\n\nPretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.",
"## Fine-tune with Fairseq for ASR with CTC\n\nAs our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.\n\nPlease note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.",
"## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...\n\nPretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.\n\nWhile it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!\n\n 1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...\n 2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.\n\nIf interested, simply follow this tutorial",
"## Referencing LeBenchmark"
] |
feature-extraction | transformers |
# LeBenchmark: wav2vec2 base model trained on 7K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcast speech. It comes in two versions; the later version (LeBenchmark 2.0) extends the first with both more pre-trained SSL models and more downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech](https://arxiv.org/abs/2309.05472)
## Model and data descriptions
We release several models under our HuggingFace organization. Four different wav2vec2 architectures (*Light*, *Base*, *Large* and *xLarge*) are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpora. In short:
## *LeBenchmark 2.0:*
- [wav2vec2-FR-14K-xlarge](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-xlarge): xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-large): Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-light](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-light): Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *LeBenchmark:*
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
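As with the other LeBenchmark checkpoints, this Base model can be queried directly through `transformers` to obtain frame-level features. The sketch below feeds one second of zeros only to probe the output shape; it is an illustration rather than official LeBenchmark code, and real use would of course pass 16 kHz French speech.

```python
import torch
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("LeBenchmark/wav2vec2-FR-7K-base")
model.eval()

dummy = torch.zeros(1, 16_000)  # one second of 16 kHz "audio", purely for shape inspection
with torch.no_grad():
    features = model(dummy).last_hidden_state

# A Base wav2vec2 encoder emits 768-dimensional vectors, roughly one every 20 ms,
# so one second of input yields a tensor of shape approximately (1, 49, 768).
print(features.shape)
```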
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used in the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e., our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
## Referencing LeBenchmark
```
@misc{parcollet2023lebenchmark,
title={LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech},
author={Titouan Parcollet and Ha Nguyen and Solene Evain and Marcely Zanon Boito and Adrien Pupier and Salima Mdhaffar and Hang Le and Sina Alisamir and Natalia Tomashenko and Marco Dinarelli and Shucong Zhang and Alexandre Allauzen and Maximin Coavoux and Yannick Esteve and Mickael Rouvier and Jerome Goulian and Benjamin Lecouteux and Francois Portet and Solange Rossato and Fabien Ringeval and Didier Schwab and Laurent Besacier},
year={2023},
eprint={2309.05472},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": "fr", "license": "apache-2.0", "tags": ["wav2vec2"]} | LeBenchmark/wav2vec2-FR-7K-base | null | [
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"fr",
"arxiv:2309.05472",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2309.05472"
] | [
"fr"
] | TAGS
#transformers #pytorch #wav2vec2 #feature-extraction #fr #arxiv-2309.05472 #license-apache-2.0 #endpoints_compatible #region-us
|
# LeBenchmark: wav2vec2 base model trained on 7K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech
## Model and data descriptions
We release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:
## *Lebenchmark 2.0:*
- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *Lebenchmark:*
- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).
- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
If interested, simply follow this tutorial
## Referencing LeBenchmark
| [
"# LeBenchmark: wav2vec2 base model trained on 7K hours of French speech\n\n \n\nLeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.\nFor more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech",
"## Model and data descriptions\n\n \nWe release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:",
"## *Lebenchmark 2.0:*\n- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).",
"## *Lebenchmark:*\n- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).\n- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).\n- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).",
"## Intended uses & limitations\n\nPretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.",
"## Fine-tune with Fairseq for ASR with CTC\n\nAs our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.\n\nPlease note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.",
"## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...\n\nPretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.\n\nWhile it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!\n\n 1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...\n 2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.\n\nIf interested, simply follow this tutorial",
"## Referencing LeBenchmark"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #feature-extraction #fr #arxiv-2309.05472 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# LeBenchmark: wav2vec2 base model trained on 7K hours of French speech\n\n \n\nLeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.\nFor more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech",
"## Model and data descriptions\n\n \nWe release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:",
"## *Lebenchmark 2.0:*\n- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).",
"## *Lebenchmark:*\n- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).\n- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).\n- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).",
"## Intended uses & limitations\n\nPretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.",
"## Fine-tune with Fairseq for ASR with CTC\n\nAs our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.\n\nPlease note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.",
"## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...\n\nPretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.\n\nWhile it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!\n\n 1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...\n 2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.\n\nIf interested, simply follow this tutorial",
"## Referencing LeBenchmark"
] |
feature-extraction | transformers |
# LeBenchmark: wav2vec2 large model trained on 7K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcast speech. It comes in two versions; the later version (LeBenchmark 2.0) extends the first with both more pre-trained SSL models and more downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech](https://arxiv.org/abs/2309.05472)
## Model and data descriptions
We release several models under our HuggingFace organization. Four different wav2vec2 architectures (*Light*, *Base*, *Large* and *xLarge*) are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpora. In short:
## *LeBenchmark 2.0:*
- [wav2vec2-FR-14K-xlarge](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-xlarge): xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-large): Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-light](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-light): Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *LeBenchmark:*
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, they can be used in the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
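The blog post linked above walks through the full recipe on the `transformers` side; purely as an orientation, the fragment below shows how such a CTC fine-tuning run would typically be initialised from this checkpoint. The vocabulary size of 40 is a placeholder that depends on your own character set and tokenizer, so this is a sketch rather than a complete training script.

```python
from transformers import Wav2Vec2ForCTC

VOCAB_SIZE = 40  # hypothetical size of a French character vocabulary

model = Wav2Vec2ForCTC.from_pretrained(
    "LeBenchmark/wav2vec2-FR-7K-large",
    vocab_size=VOCAB_SIZE,          # the CTC output layer is randomly initialised
    ctc_loss_reduction="mean",
)
model.freeze_feature_encoder()       # common practice: keep the convolutional front-end frozen
```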
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e., our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**
## Referencing LeBenchmark
```
@misc{parcollet2023lebenchmark,
title={LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech},
author={Titouan Parcollet and Ha Nguyen and Solene Evain and Marcely Zanon Boito and Adrien Pupier and Salima Mdhaffar and Hang Le and Sina Alisamir and Natalia Tomashenko and Marco Dinarelli and Shucong Zhang and Alexandre Allauzen and Maximin Coavoux and Yannick Esteve and Mickael Rouvier and Jerome Goulian and Benjamin Lecouteux and Francois Portet and Solange Rossato and Fabien Ringeval and Didier Schwab and Laurent Besacier},
year={2023},
eprint={2309.05472},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"language": "fr", "license": "apache-2.0", "tags": ["wav2vec2"]} | LeBenchmark/wav2vec2-FR-7K-large | null | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"feature-extraction",
"fr",
"arxiv:2309.05472",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2309.05472"
] | [
"fr"
] | TAGS
#transformers #pytorch #safetensors #wav2vec2 #feature-extraction #fr #arxiv-2309.05472 #license-apache-2.0 #endpoints_compatible #region-us
|
# LeBenchmark: wav2vec2 large model trained on 7K hours of French speech
LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.
For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech
## Model and data descriptions
We release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:
## *Lebenchmark 2.0:*
- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
## *Lebenchmark:*
- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).
- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
## Fine-tune with Fairseq for ASR with CTC
As our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.
Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.
## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...
Pretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.
While it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!
1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.
If interested, simply follow this tutorial
## Referencing LeBenchmark
| [
"# LeBenchmark: wav2vec2 large model trained on 7K hours of French speech\n\n \n\nLeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.\nFor more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech",
"## Model and data descriptions\n\n \nWe release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:",
"## *Lebenchmark 2.0:*\n- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).",
"## *Lebenchmark:*\n- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).\n- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).\n- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).",
"## Intended uses & limitations\n\nPretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.",
"## Fine-tune with Fairseq for ASR with CTC\n\nAs our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.\n\nPlease note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.",
"## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...\n\nPretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.\n\nWhile it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!\n\n 1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...\n 2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.\n\nIf interested, simply follow this tutorial",
"## Referencing LeBenchmark"
] | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #feature-extraction #fr #arxiv-2309.05472 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# LeBenchmark: wav2vec2 large model trained on 7K hours of French speech\n\n \n\nLeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes with 2 versions, in which, the later version (LeBenchmark 2.0) is an extended version of the first version in terms of both numbers of pre-trained SSL models, and numbers of downstream tasks.\nFor more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech",
"## Model and data descriptions\n\n \nWe release four different models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpus. In short:",
"## *Lebenchmark 2.0:*\n- wav2vec2-FR-14K-xlarge: xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-large: Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).\n- wav2vec2-FR-14K-light: Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).",
"## *Lebenchmark:*\n- wav2vec2-FR-7K-large: Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-7K-base: Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).\n- wav2vec2-FR-3K-large: Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-3K-base: Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).\n- wav2vec2-FR-2.6K-base: Base wav2vec2 trained on 2.6K hours of French speech (no spontaneous speech).\n- wav2vec2-FR-1K-large: Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).\n- wav2vec2-FR-1K-base: Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).",
"## Intended uses & limitations\n\nPretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.",
"## Fine-tune with Fairseq for ASR with CTC\n\nAs our wav2vec2 models were trained with Fairseq, then can be used in the different tools that they provide to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in this blogpost.\n\nPlease note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art. Moreover, future features might appear depending on the involvement of Fairseq and HuggingFace on this part.",
"## Integrate to SpeechBrain for ASR, Speaker, Source Separation ...\n\nPretrained wav2vec models recently gained in popularity. At the same time, SpeechBrain toolkit came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.\n\nWhile it currently is in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq i.e our LeBenchmark models!\n\n 1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification, Source Separation ...\n 2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. This is very simply allowed within SpeechBrain as just a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.\n\nIf interested, simply follow this tutorial",
"## Referencing LeBenchmark"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_300m_hi_cv7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6567
- Wer: 0.6273
- Cer: 0.2093
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 35
- mixed_precision_training: Native AMP
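The hyperparameters listed above correspond to a standard `transformers` `Trainer` setup. Expressed as `TrainingArguments`, they map roughly onto the sketch below; this is a reconstruction for readability rather than the exact training script, and the output directory name is made up.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-hi-cv7",   # hypothetical output directory
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,             # gives the effective train batch size of 64
    warmup_steps=100,
    num_train_epochs=35,
    seed=42,
    fp16=True,                                 # "Native AMP" mixed-precision training
    lr_scheduler_type="linear",
)
# The optimizer (Adam with betas=(0.9, 0.999) and epsilon=1e-08) matches the Trainer default.
```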
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 5.6969 | 9.52 | 400 | 3.3092 | 1.0 | 0.9800 |
| 1.7721 | 19.05 | 800 | 0.7769 | 0.7045 | 0.2367 |
| 0.6384 | 28.57 | 1200 | 0.6567 | 0.6273 | 0.2093 |
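The card does not state which scorer produced the Wer and Cer columns, but word error rate is conventionally computed as shown below with the `datasets` library listed under Framework versions; the two Hindi strings are made-up examples, and character error rate can be obtained the same way with the `"cer"` metric where available.

```python
from datasets import load_metric

wer_metric = load_metric("wer")

# Made-up reference transcript and model hypothesis for a single utterance.
reference = "नमस्ते आप कैसे हैं"
hypothesis = "नमस्ते आप कैसे है"

print(wer_metric.compute(predictions=[hypothesis], references=[reference]))  # 0.25
```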
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "Wav2Vec2_xls_r_300m_hi_cv7", "results": []}]} | LegolasTheElf/Wav2Vec2_xls_r_300m_hi_cv7 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
| Wav2Vec2\_xls\_r\_300m\_hi\_cv7
===============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6567
* Wer: 0.6273
* Cer: 0.2093
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 35
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 35\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 35\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_300m_hi_final
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the ['Openslr Multilingual and code-switching ASR challenge'](http://www.openslr.org/103/) dataset and ['mozilla-foundation/common_voice_7_0'](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3035
- Wer: 0.3137
- Cer: 0.0972
## Model description
More information needed
## Intended uses & limitations
More information needed
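Pending that, a minimal inference sketch with the Transformers ASR pipeline; the audio path is a placeholder for a 16 kHz mono Hindi recording:

```py
from transformers import pipeline

# Load the fine-tuned Hindi checkpoint from the Hub
asr = pipeline(
    "automatic-speech-recognition",
    model="LegolasTheElf/Wav2Vec2_xls_r_300m_hi_final",
)

# "sample_hi.wav" is a placeholder path; any 16 kHz mono Hindi recording should work
print(asr("sample_hi.wav")["text"])
```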
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.9821 | 0.64 | 400 | 0.5059 | 0.4783 | 0.1573 |
| 0.6861 | 1.28 | 800 | 0.4201 | 0.4247 | 0.1356 |
| 0.585 | 1.92 | 1200 | 0.3797 | 0.3811 | 0.1210 |
| 0.5193 | 2.56 | 1600 | 0.3577 | 0.3652 | 0.1152 |
| 0.4583 | 3.21 | 2000 | 0.3422 | 0.3519 | 0.1111 |
| 0.4282 | 3.85 | 2400 | 0.3261 | 0.3450 | 0.1071 |
| 0.3951 | 4.49 | 2800 | 0.3201 | 0.3325 | 0.1048 |
| 0.3619 | 5.13 | 3200 | 0.3167 | 0.3296 | 0.1030 |
| 0.345 | 5.77 | 3600 | 0.3157 | 0.3210 | 0.1013 |
| 0.338 | 6.41 | 4000 | 0.3051 | 0.3143 | 0.0982 |
| 0.3155 | 7.05 | 4400 | 0.3059 | 0.3154 | 0.0986 |
| 0.3057 | 7.69 | 4800 | 0.3035 | 0.3137 | 0.0972 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "Openslr Multilingual", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "model-index": [{"name": "Wav2Vec2_xls_r_300m_hi_final", "results": []}]} | LegolasTheElf/Wav2Vec2_xls_r_300m_hi_final | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"Openslr Multilingual",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"hi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hi"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #Openslr Multilingual #mozilla-foundation/common_voice_7_0 #generated_from_trainer #hi #license-apache-2.0 #endpoints_compatible #region-us
| Wav2Vec2\_xls\_r\_300m\_hi\_final
=================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the 'Openslr Multilingual and code-switching ASR challenge' dataset and 'mozilla-foundation/common\_voice\_7\_0' dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3035
* Wer: 0.3137
* Cer: 0.0972
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 8
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 8\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #Openslr Multilingual #mozilla-foundation/common_voice_7_0 #generated_from_trainer #hi #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 8\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_300m_hi_final
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the ['Openslr Multilingual and code-switching ASR challenge'](http://www.openslr.org/103/) dataset and ['mozilla-foundation/common_voice_7_0'](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3035
- Wer: 0.3137
- Cer: 0.0972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.9821 | 0.64 | 400 | 0.5059 | 0.4783 | 0.1573 |
| 0.6861 | 1.28 | 800 | 0.4201 | 0.4247 | 0.1356 |
| 0.585 | 1.92 | 1200 | 0.3797 | 0.3811 | 0.1210 |
| 0.5193 | 2.56 | 1600 | 0.3577 | 0.3652 | 0.1152 |
| 0.4583 | 3.21 | 2000 | 0.3422 | 0.3519 | 0.1111 |
| 0.4282 | 3.85 | 2400 | 0.3261 | 0.3450 | 0.1071 |
| 0.3951 | 4.49 | 2800 | 0.3201 | 0.3325 | 0.1048 |
| 0.3619 | 5.13 | 3200 | 0.3167 | 0.3296 | 0.1030 |
| 0.345 | 5.77 | 3600 | 0.3157 | 0.3210 | 0.1013 |
| 0.338 | 6.41 | 4000 | 0.3051 | 0.3143 | 0.0982 |
| 0.3155 | 7.05 | 4400 | 0.3059 | 0.3154 | 0.0986 |
| 0.3057 | 7.69 | 4800 | 0.3035 | 0.3137 | 0.0972 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0 | {"language": ["hi"], "license": "apache-2.0", "tags": ["Openslr Multilingual", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "Wav2Vec2_xls_r_300m_hi_final", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7.0", "type": "mozilla-foundation/common_voice_7_0", "args": "hi"}, "metrics": [{"type": "wer", "value": 34.21, "name": "Test WER"}]}]}]} | LegolasTheElf/Wav2Vec2_xls_r_lm_300m_hi | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"Openslr Multilingual",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"hi",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hi"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #Openslr Multilingual #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #hi #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| Wav2Vec2\_xls\_r\_300m\_hi\_final
=================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the 'Openslr Multilingual and code-switching ASR challenge' dataset and 'mozilla-foundation/common\_voice\_7\_0' dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3035
* Wer: 0.3137
* Cer: 0.0972
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 8
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 8\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #Openslr Multilingual #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #hi #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 8\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2_xls_r_openslr_Hi_V2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [Harveenchadha/indic-voice](https://huggingface.co/datasets/Harveenchadha/indic-voice) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3184
- Wer: 0.3104
- Cer: 0.0958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:------:|:---------------:|:------:|
| 7.1097 | 0.48 | 300 | 0.9965 | 3.3989 | 1.0 |
| 3.0235 | 0.96 | 600 | 0.3163 | 1.3183 | 0.7977 |
| 1.1419 | 1.44 | 900 | 0.1913 | 0.6416 | 0.5543 |
| 0.8242 | 1.92 | 1200 | 0.1608 | 0.5063 | 0.4804 |
| 0.6876 | 2.56 | 1600 | 0.1387 | 0.4401 | 0.4280 |
| 0.5868 | 3.21 | 2000 | 0.1249 | 0.3940 | 0.3907 |
| 0.5285 | 3.85 | 2400 | 0.1200 | 0.3661 | 0.3763 |
| 0.5           | 4.49  | 2800 | 0.1136 | 0.3528          | 0.3610 |
| 0.4538        | 5.13  | 3200 | 0.1086 | 0.3403          | 0.3485 |
| 0.4165        | 5.77  | 3600 | 0.1062 | 0.3335          | 0.3439 |
| 0.3989        | 6.41  | 4000 | 0.1036 | 0.3264          | 0.3340 |
| 0.3679        | 7.05  | 4400 | 0.1013 | 0.3256          | 0.3287 |
| 0.3517        | 7.69  | 4800 | 0.1002 | 0.3212          | 0.3223 |
| 0.3357        | 8.33  | 5200 | 0.0986 | 0.3173          | 0.3196 |
| 0.3225        | 8.97  | 5600 | 0.0985 | 0.3142          | 0.3177 |
| 0.3057        | 9.62  | 6000 | 0.0975 | 0.3199          | 0.3156 |
| 0.2972        | 10.26 | 6400 | 0.0967 | 0.3139          | 0.3128 |
| 0.2881        | 10.9  | 6800 | 0.0957 | 0.3184          | 0.3107 |
| 0.2791        | 11.54 | 7200 | 0.0958 | 0.3184          | 0.3104 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"language": ["hi"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "Harveenchadha/indic-voice", "generated_from_trainer"], "model-index": [{"name": "Wav2Vec2_xls_r_openslr_Hi_V2", "results": []}]} | LegolasTheElf/Wav2Vec2_xls_r_openslr_Hi_V2 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"Harveenchadha/indic-voice",
"generated_from_trainer",
"hi",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"hi"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #Harveenchadha/indic-voice #generated_from_trainer #hi #license-apache-2.0 #endpoints_compatible #region-us
| Wav2Vec2\_xls\_r\_openslr\_Hi\_V2
=================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the Harveenchadha/indic-voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3184
* Wer: 0.3104
* Cer: 0.0958
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 200
* num\_epochs: 12
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 12\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #Harveenchadha/indic-voice #generated_from_trainer #hi #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 12\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3114
## Model description
More information needed
## Intended uses & limitations
More information needed
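Pending that, a minimal fill-mask sketch; the example sentence is illustrative only:

```py
from transformers import pipeline

# Fill-mask inference with the IMDB-adapted DistilBERT checkpoint
unmasker = pipeline("fill-mask", model="Leisa/distilbert-base-uncased-finetuned-imdb")

# [MASK] is DistilBERT's mask token; the sentence is a made-up example
for prediction in unmasker("This movie was a great [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```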
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5561 | 1.0 | 782 | 2.3738 |
| 2.4474 | 2.0 | 1564 | 2.3108 |
| 2.4037 | 3.0 | 2346 | 2.3017 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imdb"], "model-index": [{"name": "distilbert-base-uncased-finetuned-imdb", "results": []}]} | Leisa/distilbert-base-uncased-finetuned-imdb | null | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #distilbert #fill-mask #generated_from_trainer #dataset-imdb #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-imdb
======================================
This model is a fine-tuned version of distilbert-base-uncased on the imdb dataset.
It achieves the following results on the evaluation set:
* Loss: 2.3114
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0
* Datasets 1.15.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #distilbert #fill-mask #generated_from_trainer #dataset-imdb #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
translation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8558
- Bleu: 52.9454
## Model description
More information needed
## Intended uses & limitations
More information needed
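Pending that, a minimal English-to-French translation sketch with the pipeline API; the input sentence is illustrative only:

```py
from transformers import pipeline

# English -> French translation with the fine-tuned Marian checkpoint
translator = pipeline("translation_en_to_fr", model="Leisa/marian-finetuned-kde4-en-to-fr")

# Example sentence in the spirit of the KDE4 domain (UI strings)
print(translator("Default to expanded threads")[0]["translation_text"])
```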
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "datasets": ["kde4"], "metrics": ["bleu"], "model-index": [{"name": "marian-finetuned-kde4-en-to-fr", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "kde4", "type": "kde4", "args": "en-fr"}, "metrics": [{"type": "bleu", "value": 52.94538305859332, "name": "Bleu"}]}]}]} | Leisa/marian-finetuned-kde4-en-to-fr | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #marian #text2text-generation #translation #generated_from_trainer #dataset-kde4 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-fr on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8558
- Bleu: 52.9454
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.10.3
| [
"# marian-finetuned-kde4-en-to-fr\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-en-fr on the kde4 dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.8558\n- Bleu: 52.9454",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0\n- Datasets 1.15.1\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #generated_from_trainer #dataset-kde4 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# marian-finetuned-kde4-en-to-fr\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-en-fr on the kde4 dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.8558\n- Bleu: 52.9454",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0\n- Datasets 1.15.1\n- Tokenizers 0.10.3"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
## Model description
We fine-tuned a wav2vec 2.0 large XLSR-53 checkpoint with 842h of unlabelled Luxembourgish speech
collected from [RTL.lu](https://www.rtl.lu/). Then the model was fine-tuned on 4h of labelled
Luxembourgish speech from the same domain.
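A minimal CTC-decoding sketch, assuming the repository bundles a Wav2Vec2 processor; the audio path is a placeholder for a 16 kHz mono Luxembourgish recording:

```py
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Lemswasabi/wav2vec2-large-xlsr-53-842h-luxembourgish-4h"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample_lb.wav" is a placeholder: any 16 kHz mono Luxembourgish recording
speech, sampling_rate = sf.read("sample_lb.wav")
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding of the most likely token per frame
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```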
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
## Citation
This model is a result of our paper `IMPROVING LUXEMBOURGISH SPEECH RECOGNITION WITH CROSS-LINGUAL SPEECH REPRESENTATIONS` submitted to the [IEEE SLT 2022 workshop](https://slt2022.org/)
```
@misc{lb-wav2vec2,
author = {Nguyen, Le Minh and Nayak, Shekhar and Coler, Matt.},
keywords = {Luxembourgish, multilingual speech recognition, language modelling, wav2vec 2.0 XLSR-53, under-resourced language},
title = {IMPROVING LUXEMBOURGISH SPEECH RECOGNITION WITH CROSS-LINGUAL SPEECH REPRESENTATIONS},
year = {2022},
copyright = {2023 IEEE}
}
``` | {"language": ["lb"], "license": "mit", "tags": ["automatic-speech-recognition", "generated_from_trainer"], "metrics": ["wer"], "pipeline_tag": "automatic-speech-recognition"} | Lemswasabi/wav2vec2-large-xlsr-53-842h-luxembourgish-4h | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"lb",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"lb"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #lb #license-mit #model-index #endpoints_compatible #region-us
|
#
## Model description
We fine-tuned a wav2vec 2.0 large XLSR-53 checkpoint with 842h of unlabelled Luxembourgish speech
collected from URL. Then the model was fine-tuned on 4h of labelled
Luxembourgish speech from the same domain.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
This model is a result of our paper 'IMPROVING LUXEMBOURGISH SPEECH RECOGNITION WITH CROSS-LINGUAL SPEECH REPRESENTATIONS' submitted to the IEEE SLT 2022 workshop
| [
"#",
"## Model description\n\nWe fine-tuned a wav2vec 2.0 large XLSR-53 checkpoint with 842h of unlabelled Luxembourgish speech\ncollected from URL. Then the model was fine-tuned on 4h of labelled\nLuxembourgish speech from the same domain.",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 7.5e-05\n- train_batch_size: 3\n- eval_batch_size: 3\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 12\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2000\n- num_epochs: 50.0\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.20.0.dev0\n- Pytorch 1.11.0+cu113\n- Datasets 2.2.1\n- Tokenizers 0.12.1\n\nThis model is a result of our paper 'IMPROVING LUXEMBOURGISH SPEECH RECOGNITION WITH CROSS-LINGUAL SPEECH REPRESENTATIONS' submitted to the IEEE SLT 2022 workshop"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #lb #license-mit #model-index #endpoints_compatible #region-us \n",
"#",
"## Model description\n\nWe fine-tuned a wav2vec 2.0 large XLSR-53 checkpoint with 842h of unlabelled Luxembourgish speech\ncollected from URL. Then the model was fine-tuned on 4h of labelled\nLuxembourgish speech from the same domain.",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 7.5e-05\n- train_batch_size: 3\n- eval_batch_size: 3\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 12\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2000\n- num_epochs: 50.0\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.20.0.dev0\n- Pytorch 1.11.0+cu113\n- Datasets 2.2.1\n- Tokenizers 0.12.1\n\nThis model is a result of our paper 'IMPROVING LUXEMBOURGISH SPEECH RECOGNITION WITH CROSS-LINGUAL SPEECH REPRESENTATIONS' submitted to the IEEE SLT 2022 workshop"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-Endpoint_with_impossible.csv
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7950
## Model description
More information needed
## Intended uses & limitations
More information needed
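Pending that, a minimal extractive question-answering sketch; the question and context are illustrative only:

```py
from transformers import pipeline

# Extractive QA with the SQuAD-style fine-tuned DistilBERT checkpoint
qa = pipeline(
    "question-answering",
    model="LenaSchmidt/distilbert-base-uncased-finetuned-squad-Endpoint_with_impossible.csv",
)

# Question and context are made-up examples
result = qa(
    question="What does the model predict?",
    context="The model predicts an answer span inside the provided context paragraph.",
)
print(result["answer"], result["score"])
```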
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.25 | 1.0 | 1273 | 0.8052 |
| 1.1199 | 2.0 | 2546 | 0.7950 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad-Endpoint_with_impossible.csv", "results": []}]} | LenaSchmidt/distilbert-base-uncased-finetuned-squad-Endpoint_with_impossible.csv | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-squad-Endpoint\_with\_impossible.csv
======================================================================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7950
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0325 | 1.0 | 585 | 1.7520 |
| 1.609 | 2.0 | 1170 | 1.7713 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]} | LenaSchmidt/distilbert-base-uncased-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-squad
=======================================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.7713
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6424
## Model description
More information needed
## Intended uses & limitations
More information needed
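Pending that, a minimal text-generation sketch; the prompt is illustrative only:

```py
from transformers import pipeline

# Text generation with the WikiText-2 fine-tuned DistilGPT-2 checkpoint
generator = pipeline("text-generation", model="LenaT/distilgpt2-finetuned-wikitext2")

# Prompt is a made-up example; sampling settings are left at pipeline defaults
print(generator("The history of natural language processing", max_new_tokens=40)[0]["generated_text"])
```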
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7608 | 1.0 | 2334 | 3.6655 |
| 3.6335 | 2.0 | 4668 | 3.6455 |
| 3.6066 | 3.0 | 7002 | 3.6424 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilgpt2-finetuned-wikitext2", "results": []}]} | LenaT/distilgpt2-finetuned-wikitext2 | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| distilgpt2-finetuned-wikitext2
==============================
This model is a fine-tuned version of distilgpt2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 3.6424
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.11.2
* Pytorch 1.9.0+cu102
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Tokenizers 0.10.3"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first
This model is a fine-tuned version of [longformer-gottbert-base-8192-aw512-](https://huggingface.co/longformer-8192-aw512-gottbert-base) on a 500 million token subset of the German parts of the OSCAR dataset.
It achieves the following results on the custom evaluation set:
- Loss: 1.4981
## Model description
The weights of the model are initialized from the German version of RoBERTa, [gottbert-base](https://huggingface.co/uklfr/gottbert-base).
The local attention windows have a fixed size of 512 tokens across all layers.
The maximum sequence length is 8192.
## Intended uses & limitations
Longformer models enable processing long texts using a mixture of local attention on each subword token and task specific global attention on a subset of the tokens.
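A minimal long-input fill-mask sketch, assuming the repository bundles its tokenizer: sliding-window local attention is used everywhere by default, and the masked position is additionally marked for task-specific global attention. The example sentence is illustrative only.

```py
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "LennartKeller/longformer-gottbert-base-8192-aw512"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# A long German document would go here; the sentence is a stand-in
text = "Die Hauptstadt von Deutschland ist " + tokenizer.mask_token + "."
inputs = tokenizer(text, return_tensors="pt")

# All tokens get local attention; the mask position is marked for global attention
global_attention_mask = torch.zeros_like(inputs.input_ids)
global_attention_mask[inputs.input_ids == tokenizer.mask_token_id] = 1

with torch.no_grad():
    logits = model(**inputs, global_attention_mask=global_attention_mask).logits

mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_index].argmax(dim=-1)))
```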
## Training and evaluation data
The [OSCAR](https://oscar-corpus.com) dataset is a freely available corpus of filtered web texts from the Common Crawl in various languages. We used the 2017 version of the dataset.
## Training procedure
The model was trained with masked language modeling for 3 epochs on a custom 500 million token subset of the German portion of the [OSCAR](https://oscar-corpus.com) dataset.
It was validated using 5% of the original subset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.5636 | 0.1 | 500 | 2.2399 |
| 2.0426 | 0.2 | 1000 | 1.8841 |
| 1.9653 | 0.3 | 1500 | 1.7807 |
| 1.9422 | 0.4 | 2000 | 1.7206 |
| 1.9323 | 0.49 | 2500 | 1.6800 |
| 1.7587 | 0.59 | 3000 | 1.6507 |
| 1.7239 | 0.69 | 3500 | 1.6316 |
| 1.7452 | 0.79 | 4000 | 1.6137 |
| 1.7415 | 0.89 | 4500 | 1.5983 |
| 1.7733 | 0.99 | 5000 | 1.5830 |
| 1.7656 | 1.09 | 5500 | 1.5735 |
| 1.6543 | 1.19 | 6000 | 1.5643 |
| 1.7131 | 1.28 | 6500 | 1.5546 |
| 1.6456 | 1.38 | 7000 | 1.5503 |
| 1.716 | 1.48 | 7500 | 1.5422 |
| 1.806 | 1.58 | 8000 | 1.5377 |
| 1.8407 | 1.68 | 8500 | 1.5327 |
| 1.6371 | 1.78 | 9000 | 1.5278 |
| 1.6453 | 1.88 | 9500 | 1.5231 |
| 1.7754 | 1.98 | 10000 | 1.5214 |
| 1.7695 | 2.08 | 10500 | 1.5165 |
| 1.7109 | 2.17 | 11000 | 1.5138 |
| 1.6992 | 2.27 | 11500 | 1.5107 |
| 1.6707 | 2.37 | 12000 | 1.5097 |
| 1.6835 | 2.47 | 12500 | 1.5040 |
| 1.7171 | 2.57 | 13000 | 1.5041 |
| 1.7257 | 2.67 | 13500 | 1.4990 |
| 1.6287 | 2.77 | 14000 | 1.5017 |
| 1.7737 | 2.87 | 14500 | 1.4983 |
| 1.4002 | 2.96 | 15000 | 1.4992 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "first", "results": []}]} | LennartKeller/longformer-gottbert-base-8192-aw512 | null | [
"transformers",
"pytorch",
"safetensors",
"longformer",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #longformer #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| first
=====
This model is a fine-tuned version of longformer-gottbert-base-8192-aw512- on a 500 million token subset of the German parts of the OSCAR dataset.
It achieves the following results on the custom evaluation set:
* Loss: 1.4981
Model description
-----------------
The weights of the model are initialized from the German version of RoBERTa, gottbert-base.
The local attention windows have a fixed size of 512 tokens across all layers.
The maximum sequence length is 8192.
Intended uses & limitations
---------------------------
Longformer models enable processing long texts using a mixture of local attention on each subword token and task specific global attention on a subset of the tokens.
Training and evaluation data
----------------------------
The OSCAR dataset is a freely available corpus of filtered web texts from the Common Crawl in various languages. We used the 2017 version of the dataset.
Training procedure
------------------
The model was trained with masked language modeling for 3 epochs on a custom 500 million token subset of the German portion of the OSCAR dataset.
It was validated using 5% of the original subset.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 2
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 3.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.17.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #safetensors #longformer #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first
This model is a fine-tuned version of [nystromformer-gottbert-base-8192](https://huggingface.co/nystromformer-gottbert-base-8192) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7133 | 0.1 | 500 | 6.6155 |
| 2.7876 | 0.2 | 1000 | 2.5542 |
| 2.1831 | 0.3 | 1500 | 2.0356 |
| 2.0316 | 0.4 | 2000 | 1.8793 |
| 2.0678 | 0.49 | 2500 | 1.7954 |
| 1.8182 | 0.59 | 3000 | 1.7473 |
| 1.7393 | 0.69 | 3500 | 1.7081 |
| 1.7586 | 0.79 | 4000 | 1.6787 |
| 1.7417 | 0.89 | 4500 | 1.6563 |
| 1.8256 | 0.99 | 5000 | 1.6370 |
| 1.7957 | 1.09 | 5500 | 1.6219 |
| 1.6876 | 1.19 | 6000 | 1.6084 |
| 1.7172 | 1.28 | 6500 | 1.5941 |
| 1.6564 | 1.38 | 7000 | 1.5881 |
| 1.732 | 1.48 | 7500 | 1.5757 |
| 1.8272 | 1.58 | 8000 | 1.5692 |
| 1.7951 | 1.68 | 8500 | 1.5617 |
| 1.6669 | 1.78 | 9000 | 1.5546 |
| 1.6489 | 1.88 | 9500 | 1.5458 |
| 1.772 | 1.98 | 10000 | 1.5439 |
| 1.7424 | 2.08 | 10500 | 1.5379 |
| 1.7077 | 2.17 | 11000 | 1.5322 |
| 1.6926 | 2.27 | 11500 | 1.5294 |
| 1.656 | 2.37 | 12000 | 1.5274 |
| 1.7002 | 2.47 | 12500 | 1.5201 |
| 1.7102 | 2.57 | 13000 | 1.5197 |
| 1.7158 | 2.67 | 13500 | 1.5162 |
| 1.6081 | 2.77 | 14000 | 1.5169 |
| 1.754 | 2.87 | 14500 | 1.5140 |
| 1.3588 | 2.96 | 15000 | 1.5135 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "first", "results": []}]} | LennartKeller/nystromformer-gottbert-base-8192 | null | [
"transformers",
"pytorch",
"safetensors",
"nystromformer",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #nystromformer #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| first
=====
This model is a fine-tuned version of nystromformer-gottbert-base-8192 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5135
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 2
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 3.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.1+cu113
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #safetensors #nystromformer #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-generation | transformers |
# Kobayashi DialoGPT Model | {"tags": ["conversational"]} | Lenza/DialoGPT-medium-Kobayashi | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Kobayashi DialoGPT Model
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
summarization | transformers | ## Hyperparameters
{
"num_train_epochs": 3,
"seed": 7,
"summary_column": "output_text",
"text_column": "text",
"encoder_max_length" : 512,
"decoder_max_length" :36,
"batch_size" : 256
}
## Usage
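A minimal sketch with the summarization pipeline, assuming the repository ships a compatible tokenizer; the input text is illustrative only:

```py
from transformers import pipeline

# Headline-style summarization for Spanish news text
summarizer = pipeline("summarization", model="LeoCordoba/beto2beto-cc-news-es-titles")

texto = (
    "La chocotorta, el tradicional y práctico antojo dulce de los argentinos, "
    "fue elegida como el mejor postre del mundo por críticos internacionales."
)
# max_length matches the decoder_max_length used during training (36)
print(summarizer(texto, max_length=36)[0]["summary_text"])
```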
## Results
| key | value |
| --- | ----- |
| eval loss | 4.539857387542725|
| eval_rouge1 |23.7478 |
| eval_rouge2 |7.3616 |
| eval_rougeL |20.6615 |
| eval_rougeLsum |20.7371 |
| eval_gen_len| 16.1806|
|test loss | 4.515065670013428|
| test_rouge1 | 23.7415|
| test_rouge2 | 7.3548|
| test_rougeL | 20.746|
| test_rougeLsum | 20.8149|
| test_gen_len| 16.1926|
| {"language": "es", "license": "apache-2.0", "tags": ["summarization", "spanish", "beto2beto", "encoder-decoder"], "datasets": ["LeoCordoba/CC-NEWS-ES-titles"], "widget": [{"text": "La chocotorta, el tradicional y pr\u00e1ctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por cr\u00edticos de restaurants internacionales, a casi 40 a\u00f1os de su creaci\u00f3n. El r\u00e1nking Taste Atlas ubic\u00f3 primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. \u201cEste postre argentino sin hornear fue influenciado por la cocina italiana y se inspir\u00f3 en el famoso tiramis\u00fa italiano. Est\u00e1 elaborado con tres ingredientes b\u00e1sicos argentinos: galletas de chocolate, dulce de leche y queso crema\u201d, explica la p\u00e1gina web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votaci\u00f3n, super\u00f3 tambi\u00e9n a los waffles belgas y el zserb\u00f3 h\u00fangaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompa\u00f1a al list\u00f3n dorado de \u201cpostre n\u00famero uno\u201c, los expertos ense\u00f1an adem\u00e1s c\u00f3mo se hacen las chocotortas, paso por paso. \u201cLas galletas se ablandan en leche y se cubren con una combinaci\u00f3n de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, caf\u00e9 o incluso licor de caf\u00e9\u201d, detallan. Por \u00faltimo, adjudican su creaci\u00f3n a una \u201ccampa\u00f1a de m\u00e1rketing\u201d dise\u00f1ada para promover las galletitas ic\u00f3nicas que le dan su nombre. La chocotorta, infaltable en los cumplea\u00f1os argentinos, fue creada en 1982 por una creativa de las agencias m\u00e1s importantes del pa\u00eds, Marit\u00e9 Mabraga\u00f1a."}], "model-index": [{"name": "beto2beto-ccnews-titles-es", "results": [{"task": {"type": "abstractive-text-summarization", "name": "Abstractive Text Summarization"}, "dataset": {"name": "CCNEWS-ES-titles", "type": "LeoCordoba/CC-NEWS-ES-titles"}, "metrics": [{"type": "rogue-1", "value": 23.7478, "name": "Validation ROGUE-1"}, {"type": "rogue-2", "value": 7.3616, "name": "Validation ROGUE-2"}, {"type": "rogue-l", "value": 20.6615, "name": "Validation ROGUE-L"}, {"type": "rogue-lsum", "value": 20.7371, "name": "Validation ROGUE-Lsum"}, {"type": "rogue-1", "value": 23.7415, "name": "Test ROGUE-1"}, {"type": "rogue-2", "value": 7.3548, "name": "Test ROGUE-2"}, {"type": "rogue-l", "value": 20.746, "name": "Test ROGUE-L"}, {"type": "rogue-lsum", "value": 20.8149, "name": "Test ROGUE-Lsum"}]}]}]} | LeoCordoba/beto2beto-cc-news-es-titles | null | [
"transformers",
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"summarization",
"spanish",
"beto2beto",
"es",
"dataset:LeoCordoba/CC-NEWS-ES-titles",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #safetensors #encoder-decoder #text2text-generation #summarization #spanish #beto2beto #es #dataset-LeoCordoba/CC-NEWS-ES-titles #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
| Hyperparameters
---------------
{
```
"num_train_epochs": 3,
"seed": 7,
"summary_column": "output_text",
"text_column": "text",
"encoder_max_length" : 512,
"decoder_max_length" :36,
"batch_size" : 256
```
}
Usage
-----
Results
-------
| [] | [
"TAGS\n#transformers #pytorch #safetensors #encoder-decoder #text2text-generation #summarization #spanish #beto2beto #es #dataset-LeoCordoba/CC-NEWS-ES-titles #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
summarization | transformers | ## beto2beto-mlsum
This model was trained on the Spanish section of MLSum: https://paperswithcode.com/sota/abstractive-text-summarization-on-mlsum.
## Hyperparameters
{
"dataset_config": "es",
"dataset_name": "mlsum",
"do_eval": true,
"do_predict": true,
"do_train": true,
"fp16": true,
"max_target_length": 64,
"num_train_epochs": 10,
"per_device_eval_batch_size": 4,
"per_device_train_batch_size": 4,
"predict_with_generate": true,
"sagemaker_container_log_level": 20,
"sagemaker_program": "run_summarization.py",
"seed": 7,
"summary_column": "summary",
"text_column": "text"
}
## Usage
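The Usage section is empty in the original card; a minimal, hypothetical sketch (assuming the checkpoint loads through `AutoModelForSeq2SeqLM` and its generation config, e.g. the decoder start token, is already set) could look like:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical sketch: load the BETO encoder-decoder checkpoint directly
tokenizer = AutoTokenizer.from_pretrained("LeoCordoba/beto2beto-mlsum")
model = AutoModelForSeq2SeqLM.from_pretrained("LeoCordoba/beto2beto-mlsum")

article = "..."  # any Spanish news article

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask,
                             max_length=64, num_beams=4)  # max_target_length was 64 during training
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```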
## Results
| metric | score |
| --- | ----- |
| validation_loss | 2.5021677017211914 |
| validation_rouge1 | 26.1256 |
| validation_rouge2 | 9.2552 |
| validation_rougeL | 21.4899 |
| validation_rougeLsum | 21.8194 |
| test_loss | 2.57672381401062 |
| test_rouge1 | 25.8639 |
| test_rouge2 | 8.911 |
| test_rougeL | 21.2426 |
| test_rougeLsum | 21.5859 |
| {"language": "es", "license": "apache-2.0", "tags": ["summarization", "spanish", "encoder-decoder", "beto"], "datasets": ["mlsum - es"], "widget": [{"text": "La chocotorta, el tradicional y pr\u00e1ctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por cr\u00edticos de restaurants internacionales, a casi 40 a\u00f1os de su creaci\u00f3n. El r\u00e1nking Taste Atlas ubic\u00f3 primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. \u201cEste postre argentino sin hornear fue influenciado por la cocina italiana y se inspir\u00f3 en el famoso tiramis\u00fa italiano. Est\u00e1 elaborado con tres ingredientes b\u00e1sicos argentinos: galletas de chocolate, dulce de leche y queso crema\u201d, explica la p\u00e1gina web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votaci\u00f3n, super\u00f3 tambi\u00e9n a los waffles belgas y el zserb\u00f3 h\u00fangaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompa\u00f1a al list\u00f3n dorado de \u201cpostre n\u00famero uno\", los expertos ense\u00f1an adem\u00e1s c\u00f3mo se hacen las chocotortas, paso por paso. \u201cLas galletas se ablandan en leche y se cubren con una combinaci\u00f3n de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, caf\u00e9 o incluso licor de caf\u00e9\u201d, detallan. Por \u00faltimo, adjudican su creaci\u00f3n a una \u201ccampa\u00f1a de m\u00e1rketing\u201d dise\u00f1ada para promover las galletitas ic\u00f3nicas que le dan su nombre. La chocotorta, infaltable en los cumplea\u00f1os argentinos, fue creada en 1982 por una creativa de las agencias m\u00e1s importantes del pa\u00eds, Marit\u00e9 Mabraga\u00f1a."}], "model-index": [{"name": "beto2beto-mlsum", "results": [{"task": {"type": "summarization", "name": "abstractive summarization"}, "dataset": {"name": "mlsum-es", "type": "mlsum", "args": "es"}, "metrics": [{"type": "rouge1", "value": 25.8639, "name": "rouge1"}, {"type": "rouge2", "value": 8.911, "name": "rouge2"}, {"type": "rougeL", "value": 21.2426, "name": "rougeL"}, {"type": "rougeLsum", "value": 21.5859, "name": "rougeLsum"}]}]}]} | LeoCordoba/beto2beto-mlsum | null | [
"transformers",
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"summarization",
"spanish",
"beto",
"es",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #safetensors #encoder-decoder #text2text-generation #summarization #spanish #beto #es #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| beto2beto-mlsum
---------------
This model was trained on the Spanish section of MLSum: URL
Hyperparameters
---------------
```
{
"dataset_config": "es",
"dataset_name": "mlsum",
"do_eval": true,
"do_predict": true,
"do_train": true,
"fp16": true,
"max_target_length": 64,
"num_train_epochs": 10,
"per_device_eval_batch_size": 4,
"per_device_train_batch_size": 4,
"predict_with_generate": true,
"sagemaker_container_log_level": 20,
"sagemaker_program": "run_summarization.py",
"seed": 7,
"summary_column": "summary",
"text_column": "text"
```
}
Usage
-----
Results
-------
| [] | [
"TAGS\n#transformers #pytorch #safetensors #encoder-decoder #text2text-generation #summarization #spanish #beto #es #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation | transformers | ## beto2beto
Usage example here: https://colab.research.google.com/drive/18a2ZfF1e_Kyyydlv8INQIkJbv294xcAm?usp=sharing
Trained for 3 epochs on CC-NEWS-ES (2019), approximately 68,000 steps. Encoder max length: 40; decoder max length: 128.
## Hyperparameters
## Usage
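The original card defers usage to the Colab notebook linked above; as a rough, hypothetical sketch (assuming the checkpoint loads through `AutoModelForSeq2SeqLM` and its generation config is set), Spanish text generation might look like:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical sketch: BETO-based encoder-decoder used for Spanish text generation
tokenizer = AutoTokenizer.from_pretrained("LeoCordoba/beto2beto")
model = AutoModelForSeq2SeqLM.from_pretrained("LeoCordoba/beto2beto")

prompt = "La inteligencia artificial"  # placeholder prompt (encoder max length is 40 tokens)
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=40)
output_ids = model.generate(inputs.input_ids, max_length=128, do_sample=True, top_k=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```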
## Results
| key | value |
| --- | ----- |
| test_loss | 2.65148806571960452 |
| {"language": "es", "license": "apache-2.0", "tags": ["text-generation", "spanish", "encoder-decoder", "beto"], "datasets": ["LeoCordoba/CC-NEWS-ES"]} | LeoCordoba/beto2beto | null | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"text-generation",
"spanish",
"beto",
"es",
"dataset:LeoCordoba/CC-NEWS-ES",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #encoder-decoder #text2text-generation #text-generation #spanish #beto #es #dataset-LeoCordoba/CC-NEWS-ES #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| beto2beto
---------
Usage example here: URL
Entrenado por 3 epochs sobre CC-NEWS-ES (2019), aproximadamente 68.000 steps. Encoder max length: 40•Decoder max length: 128
Hyperparameters
---------------
Usage
-----
Results
-------
| [] | [
"TAGS\n#transformers #pytorch #encoder-decoder #text2text-generation #text-generation #spanish #beto #es #dataset-LeoCordoba/CC-NEWS-ES #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
summarization | transformers |
## Hyperparameters
{
"max_target_length": 64,
"model_name_or_path": "google/mt5-small",
"num_train_epochs": 3,
"seed": 7,
"summary_column": "output_text",
"text_column": "text",
"encoder_max_length" : 512,
"decoder_max_length" :36,
"batch_size" : 128
}
## Usage
```
article = """ La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaña al listón dorado de “postre número uno", los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña. """
from transformers import pipeline
summarizer = pipeline("summarization", model="LeoCordoba/mt5-small-ccnews-titles-es")
summarizer(article, min_length=5, max_length=64)
```
## Results
| metric | score |
| --- | ----- |
| eval_loss | 2.879085063934326 |
| eval_rouge1 | 22.6623 |
| eval_rouge2 | 7.7894 |
| eval_rougeL | 19.8015 |
| eval_rougeLsum | 19.8092 |
| eval_gen_len | 17.1839 |
| test_loss | 2.878429412841797 |
| test_rouge1 | 22.9263 |
| test_rouge2 | 7.9146 |
| test_rougeL | 20.0272 |
| test_rougeLsum | 20.0387 |
| test_gen_len | 17.1696 | | {"language": "es", "license": "apache-2.0", "tags": ["summarization", "mt5", "spanish"], "datasets": ["LeoCordoba/CC-NEWS-ES-titles"], "widget": [{"text": "La chocotorta, el tradicional y pr\u00e1ctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por cr\u00edticos de restaurants internacionales, a casi 40 a\u00f1os de su creaci\u00f3n. El r\u00e1nking Taste Atlas ubic\u00f3 primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. \u201cEste postre argentino sin hornear fue influenciado por la cocina italiana y se inspir\u00f3 en el famoso tiramis\u00fa italiano. Est\u00e1 elaborado con tres ingredientes b\u00e1sicos argentinos: galletas de chocolate, dulce de leche y queso crema\u201d, explica la p\u00e1gina web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votaci\u00f3n, super\u00f3 tambi\u00e9n a los waffles belgas y el zserb\u00f3 h\u00fangaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompa\u00f1a al list\u00f3n dorado de \u201cpostre n\u00famero uno\u201c, los expertos ense\u00f1an adem\u00e1s c\u00f3mo se hacen las chocotortas, paso por paso. \u201cLas galletas se ablandan en leche y se cubren con una combinaci\u00f3n de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, caf\u00e9 o incluso licor de caf\u00e9\u201d, detallan. Por \u00faltimo, adjudican su creaci\u00f3n a una \u201ccampa\u00f1a de m\u00e1rketing\u201d dise\u00f1ada para promover las galletitas ic\u00f3nicas que le dan su nombre. La chocotorta, infaltable en los cumplea\u00f1os argentinos, fue creada en 1982 por una creativa de las agencias m\u00e1s importantes del pa\u00eds, Marit\u00e9 Mabraga\u00f1a."}], "model-index": [{"name": "mt5-small-ccnews-titles-es", "results": [{"task": {"type": "abstractive-text-summarization", "name": "Abstractive Text Summarization"}, "dataset": {"name": "CCNEWS-ES-titles", "type": "LeoCordoba/CC-NEWS-ES-titles"}, "metrics": [{"type": "rogue-1", "value": 22.6623, "name": "Validation ROGUE-1"}, {"type": "rogue-2", "value": 7.7894, "name": "Validation ROGUE-2"}, {"type": "rogue-l", "value": 19.8015, "name": "Validation ROGUE-L"}, {"type": "rogue-lsum", "value": 19.8092, "name": "Validation ROGUE-Lsum"}, {"type": "rogue-1", "value": 22.9263, "name": "Test ROGUE-1"}, {"type": "rogue-2", "value": 7.9146, "name": "Test ROGUE-2"}, {"type": "rogue-l", "value": 20.0272, "name": "Test ROGUE-L"}, {"type": "rogue-lsum", "value": 20.0387, "name": "Test ROGUE-Lsum"}]}]}]} | LeoCordoba/mt5-small-cc-news-es-titles | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"spanish",
"es",
"dataset:LeoCordoba/CC-NEWS-ES-titles",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #mt5 #text2text-generation #summarization #spanish #es #dataset-LeoCordoba/CC-NEWS-ES-titles #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Hyperparameters
---------------
{
```
"max_target_length": 64,
"model_name_or_path": "google/mt5-small",
"num_train_epochs": 3,
"seed": 7,
"summary_column": "output_text",
"text_column": "text",
"encoder_max_length" : 512,
"decoder_max_length" :36,
"batch_size" : 128
```
}
Usage
-----
Results
-------
| [] | [
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #summarization #spanish #es #dataset-LeoCordoba/CC-NEWS-ES-titles #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
summarization | transformers | ## mt5-small-mlsum
This model was trained on the Spanish section of MLSum: https://paperswithcode.com/sota/abstractive-text-summarization-on-mlsum based on mt5-small.
## Hyperparameters
{
"dataset_config": "es",
"dataset_name": "mlsum",
"do_eval": true,
"do_predict": true,
"do_train": true,
"fp16": true,
"max_target_length": 64,
"model_name_or_path": "google/mt5-small",
"num_train_epochs": 10,
"output_dir": "/opt/ml/checkpoints",
"per_device_eval_batch_size": 4,
"per_device_train_batch_size": 4,
"predict_with_generate": true,
"sagemaker_container_log_level": 20,
"sagemaker_program": "run_summarization.py",
"save_strategy": "epoch",
"seed": 7,
"summary_column": "summary",
"text_column": "text"
}
## Usage
```
article = """ La chocotorta, el tradicional y práctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por críticos de restaurants internacionales, a casi 40 años de su creación. El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. “Este postre argentino sin hornear fue influenciado por la cocina italiana y se inspiró en el famoso tiramisú italiano. Está elaborado con tres ingredientes básicos argentinos: galletas de chocolate, dulce de leche y queso crema”, explica la página web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votación, superó también a los waffles belgas y el zserbó húngaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompaña al listón dorado de “postre número uno", los expertos enseñan además cómo se hacen las chocotortas, paso por paso. “Las galletas se ablandan en leche y se cubren con una combinación de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, café o incluso licor de café”, detallan. Por último, adjudican su creación a una “campaña de márketing” diseñada para promover las galletitas icónicas que le dan su nombre. La chocotorta, infaltable en los cumpleaños argentinos, fue creada en 1982 por una creativa de las agencias más importantes del país, Marité Mabragaña. """
from transformers import pipeline
summarizer = pipeline("summarization", model="LeoCordoba/mt5-small-mlsum")
summarizer(article, min_length=5, max_length=64)
```
result: [{'summary_text': 'El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche'}]
## Results
| metric | score |
| --- | ----- |
| eval_rouge1 | 26.4352 |
| eval_rouge2 | 8.9293 |
| eval_rougeL | 21.2622 |
| eval_rougeLsum | 21.5518 |
| test_rouge1 | 26.0756 |
| test_rouge2 | 8.4669 |
| test_rougeL | 20.8167 |
| test_rougeLsum | 21.0822 |
| {"language": "es", "license": "apache-2.0", "tags": ["summarization", "sagemaker", "mt5", "spanish"], "datasets": ["mlsum - es"], "widget": [{"text": "La chocotorta, el tradicional y pr\u00e1ctico antojo dulce de los argentinos, fue elegida como el mejor postre del mundo por cr\u00edticos de restaurants internacionales, a casi 40 a\u00f1os de su creaci\u00f3n. El r\u00e1nking Taste Atlas ubic\u00f3 primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche, por delante del helado de pistacho italiano y la tarta alemana de manzana. \u201cEste postre argentino sin hornear fue influenciado por la cocina italiana y se inspir\u00f3 en el famoso tiramis\u00fa italiano. Est\u00e1 elaborado con tres ingredientes b\u00e1sicos argentinos: galletas de chocolate, dulce de leche y queso crema\u201d, explica la p\u00e1gina web que exhorta a los turistas de todo el mundo a que prueben la chocotorta. En la votaci\u00f3n, super\u00f3 tambi\u00e9n a los waffles belgas y el zserb\u00f3 h\u00fangaro. A nivel local le sigue el alfajor, con 4,2 puntos contra los 4,7 de la torta. En el texto que acompa\u00f1a al list\u00f3n dorado de \u201cpostre n\u00famero uno\u201c, los expertos ense\u00f1an adem\u00e1s c\u00f3mo se hacen las chocotortas, paso por paso. \u201cLas galletas se ablandan en leche y se cubren con una combinaci\u00f3n de queso crema y dulce de leche. Las formas de la chocotorta pueden variar, mientras que las galletas se pueden remojar con leche con chocolate, caf\u00e9 o incluso licor de caf\u00e9\u201d, detallan. Por \u00faltimo, adjudican su creaci\u00f3n a una \u201ccampa\u00f1a de m\u00e1rketing\u201d dise\u00f1ada para promover las galletitas ic\u00f3nicas que le dan su nombre. La chocotorta, infaltable en los cumplea\u00f1os argentinos, fue creada en 1982 por una creativa de las agencias m\u00e1s importantes del pa\u00eds, Marit\u00e9 Mabraga\u00f1a."}], "model-index": [{"name": "mt5-small-mlsum", "results": [{"task": {"type": "summarization", "name": "abstractive summarization"}, "dataset": {"name": "mlsum-es", "type": "mlsum", "args": "es"}, "metrics": [{"type": "rouge1", "value": 26.0756, "name": "rouge1"}, {"type": "rouge2", "value": 8.4669, "name": "rouge2"}, {"type": "rougeL", "value": 20.8167, "name": "rougeL"}, {"type": "rougeLsum", "value": 21.0822, "name": "rougeLsum"}]}]}]} | LeoCordoba/mt5-small-mlsum | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"sagemaker",
"spanish",
"es",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #jax #safetensors #mt5 #text2text-generation #summarization #sagemaker #spanish #es #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| mt5-small-mlsum
---------------
This model was trained on the Spanish section of MLSum: URL based on mt5-small.
Hyperparameters
---------------
{
"dataset\_config": "es",
"dataset\_name": "mlsum",
"do\_eval": true,
"do\_predict": true,
"do\_train": true,
"fp16": true,
"max\_target\_length": 64,
"model\_name\_or\_path": "google/mt5-small",
"num\_train\_epochs": 10,
"output\_dir": "/opt/ml/checkpoints",
"per\_device\_eval\_batch\_size": 4,
"per\_device\_train\_batch\_size": 4,
"predict\_with\_generate": true,
"sagemaker\_container\_log\_level": 20,
"sagemaker\_program": "run\_summarization.py",
"save\_strategy": "epoch",
"seed": 7,
"summary\_column": "summary",
"text\_column": "text"
}
Usage
-----
result: [{'summary\_text': 'El ránking Taste Atlas ubicó primero en su lista al postre insignia local de galletitas, queso crema y dulce de leche'}]
Results
-------
| [] | [
"TAGS\n#transformers #pytorch #jax #safetensors #mt5 #text2text-generation #summarization #sagemaker #spanish #es #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
This is Chandler.
Chandler is your friend too. | {"tags": ["conversational"]} | Leonel/DialoGPT-small-chandler | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
This is Chandler.
Chandler is your friend too. | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Michael DialoGPT Model | {"tags": ["conversational"]} | Leostronkest/DialoGPT-small-michael | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Michael DialoGPT Model | [
"# Michael DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Michael DialoGPT Model"
] |
text-generation | transformers |
## A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)
DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multiturn conversations.
The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the response generated from DialoGPT is comparable to human response quality under a single-turn conversation Turing test.
The model is trained on 147M multi-turn dialogue from Reddit discussion thread.
* Multi-turn generation examples from an interactive environment:
|Role | Response |
|---------|--------|
|User | Does money buy happiness? |
| Bot | Depends how much money you spend on it .|
|User | What is the best way to buy happiness ? |
| Bot | You just have to be a millionaire by your early 20s, then you can be happy . |
|User |This is so difficult ! |
| Bot | You have no idea how hard it is to be a millionaire and happy . There is a reason the rich have a lot of money |
Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty-print the last output tokens from the bot
print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
| {"license": "mit", "tags": ["conversational"], "thumbnail": "https://huggingface.co/front/thumbnails/dialogpt.png"} | Leostronkest/DialoGPT | null | [
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"conversational",
"arxiv:1911.00536",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1911.00536"
] | [] | TAGS
#transformers #pytorch #tf #jax #gpt2 #text-generation #conversational #arxiv-1911.00536 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)
------------------------------------------------------------------------------
DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multiturn conversations.
The human evaluation results indicate that the response generated from DialoGPT is comparable to human response quality under a single-turn conversation Turing test.
The model is trained on 147M multi-turn dialogue from Reddit discussion thread.
* Multi-turn generation examples from an interactive environment:
Please find the information about preprocessing, training and full details of the DialoGPT in the original DialoGPT repository
ArXiv paper: URL
### How to use
Now we are ready to try out how the model works as a chatting partner!
| [
"### How to use\n\n\nNow we are ready to try out how the model works as a chatting partner!"
] | [
"TAGS\n#transformers #pytorch #tf #jax #gpt2 #text-generation #conversational #arxiv-1911.00536 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### How to use\n\n\nNow we are ready to try out how the model works as a chatting partner!"
] |
fill-mask | transformers | # scibert-wechsel-korean
Scibert(🇺🇸) converted into Korean(🇰🇷) using WECHSEL technique.
### Description
- SciBERT is trained on papers from the corpus of semanticscholar.org. Corpus size is 1.14M papers, 3.1B tokens.
- WECHSEL converts the embedding layer's subword tokens from the source language to the target language.
- SciBERT, trained on English text, is converted into Korean using the WECHSEL technique.
- The Korean tokenizer is taken from the KLUE PLMs' tokenizers due to their similar vocab size (32000) and performance (see the loading sketch below).
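A minimal loading sketch (not part of the original card), assuming the checkpoint behaves as a standard BERT masked language model, as the tags suggest; the Korean sentence is a placeholder and the mask token is assumed to be `[MASK]`:

```python
from transformers import pipeline

# Hypothetical sketch: use the converted checkpoint as a Korean fill-mask model
fill_mask = pipeline("fill-mask", model="LeverageX/scibert-wechsel-korean")

# Placeholder sentence; assumes the KLUE-style tokenizer uses [MASK] as its mask token
print(fill_mask("한국어는 아름다운 [MASK]이다."))
```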
### Reference
- [Scibert](https://github.com/allenai/scibert)
- [WECHSEL](https://github.com/CPJKU/wechsel)
- [Korean Language Understanding Evaluation](https://github.com/KLUE-benchmark/KLUE) | {} | LeverageX/scibert-wechsel-korean | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| # scibert-wechsel-korean
Scibert(🇺🇸) converted into Korean(🇰🇷) using WECHSEL technique.
### Description
- SciBERT is trained on papers from the corpus of URL. Corpus size is 1.14M papers, 3.1B tokens.
- Wechsel is converting embedding layer's subword tokens from source language to target language.
- SciBERT trained with English language is converted into Korean langauge using Wechsel technique.
- Korean tokenizer is selected with KLUE PLMs' tokenizers due to its similar vocab size(32000) and performance.
### Reference
- Scibert
- WECHSEL
- Korean Language Understanding Evaluation | [
"# scibert-wechsel-korean\n\nScibert(🇺🇸) converted into Korean(🇰🇷) using WECHSEL technique.",
"### Description\n- SciBERT is trained on papers from the corpus of URL. Corpus size is 1.14M papers, 3.1B tokens. \n- Wechsel is converting embedding layer's subword tokens from source language to target language. \n- SciBERT trained with English language is converted into Korean langauge using Wechsel technique.\n- Korean tokenizer is selected with KLUE PLMs' tokenizers due to its similar vocab size(32000) and performance.",
"### Reference\n- Scibert\n- WECHSEL\n- Korean Language Understanding Evaluation"
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n",
"# scibert-wechsel-korean\n\nScibert(🇺🇸) converted into Korean(🇰🇷) using WECHSEL technique.",
"### Description\n- SciBERT is trained on papers from the corpus of URL. Corpus size is 1.14M papers, 3.1B tokens. \n- Wechsel is converting embedding layer's subword tokens from source language to target language. \n- SciBERT trained with English language is converted into Korean langauge using Wechsel technique.\n- Korean tokenizer is selected with KLUE PLMs' tokenizers due to its similar vocab size(32000) and performance.",
"### Reference\n- Scibert\n- WECHSEL\n- Korean Language Understanding Evaluation"
] |
text-generation | transformers |
# Jake99 DialoGPT model | {"tags": ["conversational"]} | Leviii03/Dialogpt-small-Jake99 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Jake99 DialoGPT model | [
"# Jake99 DialoGPT model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Jake99 DialoGPT model"
] |
text-classification | transformers | [bert-base-uncased](https://huggingface.co/bert-base-uncased) fine-tuned on the [QNLI](https://huggingface.co/datasets/glue) dataset for 2 epochs.
The fine-tuning process was performed on 2x NVIDIA GeForce GTX 1080 Ti GPUs (11GB). The parameters are:
```
max_seq_length=512
per_device_train_batch_size=8
gradient_accumulation_steps=2
total train batch size (w. parallel, distributed & accumulation) = 32
learning_rate=3e-5
```
## Evaluation results
eval_accuracy = 0.916895
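A hypothetical inference sketch (not from the original card), assuming the standard sequence-classification head; the label names depend on how the head was saved, so check `model.config.id2label` rather than assuming the GLUE QNLI ordering:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Li/bert-base-uncased-qnli")
model = AutoModelForSequenceClassification.from_pretrained("Li/bert-base-uncased-qnli")

question = "Where is the Eiffel Tower located?"
sentence = "The Eiffel Tower is a wrought-iron lattice tower in Paris, France."

# QNLI is a sentence-pair task: encode the question and the candidate sentence together
inputs = tokenizer(question, sentence, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(pred, model.config.id2label.get(pred, pred))  # label names come from the saved config
```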
## More information
The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of the GLUE benchmark.
(source: https://paperswithcode.com/dataset/qnli) | {} | Li/bert-base-uncased-qnli | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
| bert-base-uncased fine-tuned on the QNLI dataset for 2 epochs.
The fine-tuning process was performed on 2x NVIDIA GeForce GTX 1080 Ti GPUs (11GB). The parameters are:
## Evaluation results
eval_accuracy = 0.916895
## More information
The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of GLEU benchmark.
(source: URL | [
"## Evaluation results\n\neval_accuracy = 0.916895",
"## More information\n\nThe QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of GLEU benchmark.\n\n(source: URL"
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"## Evaluation results\n\neval_accuracy = 0.916895",
"## More information\n\nThe QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of GLEU benchmark.\n\n(source: URL"
] |
question-answering | transformers | [roberta-base](https://huggingface.co/roberta-base) fine-tuned on the [SQuAD2](https://rajpurkar.github.io/SQuAD-explorer) dataset for 2 epochs.
The fine-tuning process was performed on a single NVIDIA Tesla T4 GPU (15GB). The hyperparameters are:
```
max_seq_length=512
per_device_train_batch_size=8
gradient_accumulation_steps=4
total train batch size (w. parallel, distributed & accumulation) = 32
learning_rate=3e-5
```
## Evaluation results
```
"eval_exact": 80.33352985766024,
"eval_f1": 83.38322909593009,
"eval_HasAns_exact": 77.81713900134953,
"eval_HasAns_f1": 83.925283241562,
"eval_HasAns_total": 5928,
"eval_NoAns_exact": 82.84272497897393,
"eval_NoAns_f1": 82.84272497897393,
"eval_NoAns_total": 5945,
"eval_best_exact": 80.33352985766024,
"eval_best_exact_thresh": 0.0,
"eval_best_f1": 83.38322909593005,
"eval_best_f1_thresh": 0.0,
"eval_samples": 11955,
"eval_total": 11873,
```
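A minimal inference sketch (not part of the original card), assuming the checkpoint works with the standard question-answering pipeline; SQuAD2-style models can also abstain, which the pipeline exposes via `handle_impossible_answer`:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Li/roberta-base-squad2")

result = qa(
    question="What does SQuAD2.0 add to SQuAD1.1?",
    context=(
        "SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 "
        "unanswerable questions written adversarially by crowdworkers."
    ),
    handle_impossible_answer=True,  # allow an empty answer for unanswerable questions
)
print(result)
```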
## More information
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. (https://rajpurkar.github.io/SQuAD-explorer/) | {} | Li/roberta-base-squad2 | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #roberta #question-answering #endpoints_compatible #region-us
| roberta-base fine-tuned on the SQuAD2 dataset for 2 epochs.
The fine-tuning process was performed on a single NVIDIA Tesla T4 GPU (15GB). The hyperparameters are:
## Evaluation results
## More information
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. (URL | [
"## Evaluation results",
"## More information\n\nStanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.\n\nSQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. (URL"
] | [
"TAGS\n#transformers #pytorch #safetensors #roberta #question-answering #endpoints_compatible #region-us \n",
"## Evaluation results",
"## More information\n\nStanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.\n\nSQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. (URL"
] |
text-classification | transformers | At its core it uses a BERT-Base model (bert-base-uncased) fine-tuned on the MS MARCO passage classification task using the Sim-Pair marking strategy that highlights exact term matches between the query and the passage via marker tokens (#). It can be loaded using the TF/AutoModelForSequenceClassification classes.
Refer to our [github repository](https://github.com/BOUALILILila/ExactMatchMarking) for a usage example for ad hoc ranking.
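A rough loading sketch based on the card's statement that the checkpoint works with `AutoModelForSequenceClassification`; the exact Sim-Pair marking format (how the `#` markers wrap matched terms) is defined in the linked repository, so the marked strings below are only an illustrative assumption:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LilaBoualili/bert-sim-pair")
model = AutoModelForSequenceClassification.from_pretrained("LilaBoualili/bert-sim-pair")

# Illustrative only: exact-match terms marked with '#'; follow the ExactMatchMarking
# repository for the precise marking procedure expected by the model.
query = "# lung # cancer treatment"
passage = "New therapies for # lung # carcinoma are under clinical evaluation."

inputs = tokenizer(query, passage, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # relevance scores for the (query, passage) pair
```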
| {} | LilaBoualili/bert-sim-pair | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
| At its core it uses an BERT-Base model (bert-base-uncased) fine-tuned on the MS MARCO passage classification task using the Sim-Pair marking strategy that highlights exact term matches between the query and the passage via marker tokens (#). It can be loaded using the TF/AutoModelForSequenceClassification classes.
Refer to our github repository for a usage example for ad hoc ranking.
| [] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers | At its core it uses a BERT-Base model (bert-base-uncased) fine-tuned on the MS MARCO passage classification task. It can be loaded using the TF/AutoModelForSequenceClassification classes.
Refer to our [github repository](https://github.com/BOUALILILila/ExactMatchMarking) for a usage example for ad hoc ranking. | {} | LilaBoualili/bert-vanilla | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
| At its core it uses a BERT-Base model (bert-base-uncased) fine-tuned on the MS MARCO passage classification task. It can be loaded using the TF/AutoModelForSequenceClassification classes.
Refer to our github repository for a usage example for ad hoc ranking. | [] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers | At its core it uses an ELECTRA-Base model (google/electra-base-discriminator) fine-tuned on the MS MARCO passage classification task using the Sim-Pair marking strategy that highlights exact term matches between the query and the passage via marker tokens (#). It can be loaded using the TF/AutoModelForSequenceClassification classes but it follows the same classification layer defined for BERT similarly to the TFElectraRelevanceHead in the Capreolus BERT-MaxP implementation.
Refer to our [github repository](https://github.com/BOUALILILila/ExactMatchMarking) for a usage example for ad hoc ranking. | {} | LilaBoualili/electra-sim-pair | null | [
"transformers",
"pytorch",
"tf",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #electra #text-classification #autotrain_compatible #endpoints_compatible #region-us
| At its core it uses an ELECTRA-Base model (google/electra-base-discriminator) fine-tuned on the MS MARCO passage classification task using the Sim-Pair marking strategy that highlights exact term matches between the query and the passage via marker tokens (#). It can be loaded using the TF/AutoModelForSequenceClassification classes but it follows the same classification layer defined for BERT similarly to the TFElectraRelevanceHead in the Capreolus BERT-MaxP implementation.
Refer to our github repository for a usage example for ad hoc ranking. | [] | [
"TAGS\n#transformers #pytorch #tf #electra #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers | At its core it uses an ELECTRA-Base model (google/electra-base-discriminator) fine-tuned on the MS MARCO passage classification task. It can be loaded using the TF/AutoModelForSequenceClassification classes but it follows the same classification layer defined for BERT similarly to the TFElectraRelevanceHead in the Capreolus BERT-MaxP implementation.
Refer to our [github repository](https://github.com/BOUALILILila/ExactMatchMarking) for a usage example for ad hoc ranking. | {} | LilaBoualili/electra-vanilla | null | [
"transformers",
"pytorch",
"tf",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #electra #text-classification #autotrain_compatible #endpoints_compatible #region-us
| At its core it uses an ELECTRA-Base model (google/electra-base-discriminator) fine-tuned on the MS MARCO passage classification task. It can be loaded using the TF/AutoModelForSequenceClassification classes but it follows the same classification layer defined for BERT similarly to the TFElectraRelevanceHead in the Capreolus BERT-MaxP implementation.
Refer to our github repository for a usage example for ad hoc ranking. | [] | [
"TAGS\n#transformers #pytorch #tf #electra #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | null | Don't read it, bro. | {} | LinuxMac/denema | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| Don't read it, bro. | [] | [
"TAGS\n#region-us \n"
] |
text2text-generation | transformers | ## End-to-end Conversational search model
An end-to-end conversational search system for online shopping. It was introduced in [this paper](https://arxiv.org/abs/2109.05460), published at EMNLP.
## Model description
ConvSearch is an end-to-end conversational search system that deeply combines the dialog and search systems to improve search performance. In particular, the Product Search module leverages both structured product attributes and unstructured product text (e.g. profiles), where the product text may contain phrases matching the user's utterances when the schema is incomplete or a product attribute value is missing. Put together, our system has the advantage of both reduced error accumulation across individual modules and enhanced robustness against product schema/knowledge gaps.
## Intended uses & limitations
You can use the raw model to understand the dialog between the consumer and the server. The concatenated dialogs can be parsed into intents (e.g. inform, request, buy, etc.) and product attributes.
You can also fine-tune this model on similar downstream tasks, such as a dialog system for shopping in your own scenario or a customer service system. Since our model is sequence-to-sequence, any dialog system that can be reformulated as sequence-to-sequence can be implemented based on this model.
## How to use
You can use this model directly with:
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("LiqiangXiao/ConvSearch_QU")
model = AutoModelForSeq2SeqLM.from_pretrained("LiqiangXiao/ConvSearch_QU")
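The snippet above only loads the model; a hypothetical continuation (assuming the BART checkpoint's generation config is usable as-is, and with a made-up dialog string) might look like:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("LiqiangXiao/ConvSearch_QU")
model = AutoModelForSeq2SeqLM.from_pretrained("LiqiangXiao/ConvSearch_QU")

# Made-up concatenated dialog; the real input format should follow the paper/repository
dialog = "user: I am looking for a waterproof hiking backpack under 50 dollars."
inputs = tokenizer(dialog, return_tensors="pt", truncation=True)
output_ids = model.generate(inputs.input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # expected: parsed intents/attributes
```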
## Training data
ConvSearch was pretrained on a dialog corpus with 49,999 dialogs/942,766 turns.
| {} | LiqiangXiao/ConvSearch_QU | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"arxiv:2109.05460",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2109.05460"
] | [] | TAGS
#transformers #pytorch #bart #text2text-generation #arxiv-2109.05460 #autotrain_compatible #endpoints_compatible #region-us
| ## End-to-end Conversational search model
A end-to-end system of conversational search system for online shopping. It was introduced in this paper published on conference EMNLP.
## Model description
ConvSearch is an end-to-end conversational search system that deeply combines the dialog and search system to improve the search performance. In particular, the Product Search module leverages both structured product attributes and unstructured product text (e.g. profile), where the product text may contain phrases matching with utterances when schema is incomplete or when a product attribute value is missing. Putting together, our system has the advantage of both reduced error accumulation along individual modules, and enhanced robustness against product schema/knowledge gaps.
## Intended uses & limitations
You can use the raw model to understand the dialog between consumer and server. The concatenated dialogs can be parsed into intents (e.g. inform, request, buy, et al.) and attributes of products.
You can also fine-tune this model on similar down-stream tasks, such as a dialog system for shopping in your scenario or customer service system. Since our model is seq-to-seq, any dialog system that can be reformed to seq-to-seq can be implemented based on this model.
## How to use
You can use this model directly with:
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("LiqiangXiao/ConvSearch_QU")
model = AutoModelForSeq2SeqLM.from_pretrained("LiqiangXiao/ConvSearch_QU")
## Training data
ConvSearch was pretrained on a dialog corpus with 49,999 dialogs/942,766 turns.
| [
"## End-to-end Conversational search model\nA end-to-end system of conversational search system for online shopping. It was introduced in this paper published on conference EMNLP.",
"## Model description\nConvSearch is an end-to-end conversational search system that deeply combines the dialog and search system to improve the search performance. In particular, the Product Search module leverages both structured product attributes and unstructured product text (e.g. profile), where the product text may contain phrases matching with utterances when schema is incomplete or when a product attribute value is missing. Putting together, our system has the advantage of both reduced error accumulation along individual modules, and enhanced robustness against product schema/knowledge gaps.",
"## Intended uses & limitations \nYou can use the raw model to understand the dialog between consumer and server. The concatenated dialogs can be parsed into intents (e.g. inform, request, buy, et al.) and attributes of products.\n\nYou can also fine-tune this model on similar down-stream tasks, such as a dialog system for shopping in your scenario or customer service system. Since our model is seq-to-seq, any dialog system that can be reformed to seq-to-seq can be implemented based on this model.",
"## How to use \nYou can use this model directly with:\n\n \n from transformers import AutoTokenizer, AutoModelForSeq2SeqLM\n tokenizer = AutoTokenizer.from_pretrained(\"LiqiangXiao/ConvSearch_QU\")\n model = AutoModelForSeq2SeqLM.from_pretrained(\"LiqiangXiao/ConvSearch_QU\")",
"## Training data\nConvSearch was pretrained on a dialog corpus with 49,999 dialogs/942,766 turns."
] | [
"TAGS\n#transformers #pytorch #bart #text2text-generation #arxiv-2109.05460 #autotrain_compatible #endpoints_compatible #region-us \n",
"## End-to-end Conversational search model\nA end-to-end system of conversational search system for online shopping. It was introduced in this paper published on conference EMNLP.",
"## Model description\nConvSearch is an end-to-end conversational search system that deeply combines the dialog and search system to improve the search performance. In particular, the Product Search module leverages both structured product attributes and unstructured product text (e.g. profile), where the product text may contain phrases matching with utterances when schema is incomplete or when a product attribute value is missing. Putting together, our system has the advantage of both reduced error accumulation along individual modules, and enhanced robustness against product schema/knowledge gaps.",
"## Intended uses & limitations \nYou can use the raw model to understand the dialog between consumer and server. The concatenated dialogs can be parsed into intents (e.g. inform, request, buy, et al.) and attributes of products.\n\nYou can also fine-tune this model on similar down-stream tasks, such as a dialog system for shopping in your scenario or customer service system. Since our model is seq-to-seq, any dialog system that can be reformed to seq-to-seq can be implemented based on this model.",
"## How to use \nYou can use this model directly with:\n\n \n from transformers import AutoTokenizer, AutoModelForSeq2SeqLM\n tokenizer = AutoTokenizer.from_pretrained(\"LiqiangXiao/ConvSearch_QU\")\n model = AutoModelForSeq2SeqLM.from_pretrained(\"LiqiangXiao/ConvSearch_QU\")",
"## Training data\nConvSearch was pretrained on a dialog corpus with 49,999 dialogs/942,766 turns."
] |
text2text-generation | transformers | ## Copy-or-Rewrite
This repository contains the code of the paper "Copy or Rewrite: Hybrid Summarization with Hierarchical Reinforcement Learning", a model built for the human-like summarization task and trained with actor-critic reinforcement learning. This work significantly improved ROUGE scores on the CNN/DM dataset by 1.7 points and improved the informativeness and readability of the generated summaries. It implements a more human-like summarization workflow that addresses the information loss problem, and it contains a novel hierarchical transformer module that represents an article at both the word and sentence level, together with a new reinforcement learning method that can effectively train the two-step model.
## Model description
Copy-or-Rewrite is a model to improve the workflow of summarization models. Existing methods that adopt an extract-then-abstract strategy have achieved impressive results, yet they suffer from information loss in the abstraction step because they compress all the selected sentences without distinction. Especially when a whole sentence is summary-worthy, salient content would be lost by compression. To address this problem, we propose HYSUM, a hybrid framework for summarization that can flexibly switch between copying a sentence and rewriting a sentence according to the degree of redundancy. In this way, our approach can effectively combine the advantages of the two branches of summarization, balancing informativeness and conciseness. Moreover, based on Hierarchical Reinforcement Learning, we propose an end-to-end reinforcement method to bridge the extraction module and the rewriting module, which can enhance the cooperation between them. Automatic evaluation shows that our approach significantly outperforms the state of the art on the CNN/DailyMail corpus. Human evaluation also demonstrates that our generated summaries are more informative and concise than those of popular models.
## Intended uses & limitations
With this repository, you can generate informative and concise summaries for input articles. For other tasks, you may use the hierarchical representation module to effectively represent the article. The parameters of the model are pre-trained on the CNN/DM dataset. You may need to fine-tune it on your own dataset when needed.
## How to use
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("LiqiangXiao/summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("LiqiangXiao/summarization")
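As a hypothetical extension of the loading snippet above, a plain `generate()` call on the BART checkpoint could be used for inference; whether a single call reproduces the full copy-or-rewrite pipeline described in the paper is not stated in the card, so treat this only as an illustration:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("LiqiangXiao/summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("LiqiangXiao/summarization")

article = "..."  # any English news article (the model was trained on CNN/Daily Mail)
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(inputs.input_ids, max_length=120, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```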
## Training data
This model used the non-anonymous version of CNN/Daily Mail dataset.
## BibTeX entry and citation info
@inproceedings{DBLP:conf/aaai/XiaoWHJ20,
author = {Liqiang Xiao and
Lu Wang and
Hao He and
Yaohui Jin},
title = {Copy or Rewrite: Hybrid Summarization with Hierarchical Reinforcement
Learning},
booktitle = {The Thirty-Fourth {AAAI} Conference on Artificial Intelligence, {AAAI}
2020, The Thirty-Second Innovative Applications of Artificial Intelligence
Conference, {IAAI} 2020, The Tenth {AAAI} Symposium on Educational
Advances in Artificial Intelligence, {EAAI} 2020, New York, NY, USA,
February 7-12, 2020},
pages = {9306--9313},
publisher = {{AAAI} Press},
year = {2020},
url = {https://aaai.org/ojs/index.php/AAAI/article/view/6470},
timestamp = {Tue, 02 Feb 2021 08:00:14 +0100},
biburl = {https://dblp.org/rec/conf/aaai/XiaoWHJ20.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
| {} | LiqiangXiao/summarization | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
| ## Copy-or-Rewrite
This repository contains the code of paper "Copy or Rewrite: Hybrid Summarization with Hierarchical Reinforcement Learning". A model built for human-like summarization task and trained with Actor-critic Reinforcement Learning. This work significantly improved the ROUGE scores on CNN/DM dataset by 1.7 and augmented the informativity and readability of generated summaries. It implemented a more human-like workflow for summarization task solving the information loss problem. It contains a novel hierarchical transformer module to represent article in both word and sentence level, a new reinforcement learning method that can effectively train two-step model.
## Model description
Copy-or-Rewrite is a model that improves the workflow of summarization models. Existing methods that adopt an extract-then-abstract strategy have achieved impressive results, yet they suffer from information loss in the abstraction step because they compress all of the selected sentences without distinction. In particular, when a whole sentence is summary-worthy, salient content can be lost through compression. To address this problem, we propose HYSUM, a hybrid framework for summarization that can flexibly switch between copying a sentence and rewriting it according to its degree of redundancy. In this way, our approach can effectively combine the advantages of the two branches of summarization, balancing informativeness and conciseness. Moreover, based on Hierarchical Reinforcement Learning, we propose an end-to-end reinforcing method to bridge the extraction module and the rewriting module, which enhances the cooperation between them. Automatic evaluation shows that our approach significantly outperforms the state of the art on the CNN/DailyMail corpus. Human evaluation also demonstrates that our generated summaries are more informative and concise than those of popular models.
## Intended uses & limitations
With this repository, you can generate informative and concise summaries for input articles. For other tasks, you may use the hierarchical representation module to effectively represent an article. The model's parameters are pre-trained on the CNN/DM dataset; you may need to fine-tune it on your own dataset when needed.
## How to use
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("LiqiangXiao/summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("LiqiangXiao/summarization")
## Training data
This model used the non-anonymized version of the CNN/Daily Mail dataset.
## BibTeX entry and citation info
@inproceedings{DBLP:conf/aaai/XiaoWHJ20,
author = {Liqiang Xiao and
Lu Wang and
Hao He and
Yaohui Jin},
title = {Copy or Rewrite: Hybrid Summarization with Hierarchical Reinforcement
Learning},
booktitle = {The Thirty-Fourth {AAAI} Conference on Artificial Intelligence, {AAAI}
2020, The Thirty-Second Innovative Applications of Artificial Intelligence
Conference, {IAAI} 2020, The Tenth {AAAI} Symposium on Educational
Advances in Artificial Intelligence, {EAAI} 2020, New York, NY, USA,
February 7-12, 2020},
pages = {9306--9313},
publisher = {{AAAI} Press},
year = {2020},
url = {URL
timestamp = {Tue, 02 Feb 2021 08:00:14 +0100},
biburl = {URL
bibsource = {dblp computer science bibliography, URL}
}
| [
"## Copy-or-Rewrite\nThis repository contains the code of paper \"Copy or Rewrite: Hybrid Summarization with Hierarchical Reinforcement Learning\". A model built for human-like summarization task and trained with Actor-critic Reinforcement Learning. This work significantly improved the ROUGE scores on CNN/DM dataset by 1.7 and augmented the informativity and readability of generated summaries. It implemented a more human-like workflow for summarization task solving the information loss problem. It contains a novel hierarchical transformer module to represent article in both word and sentence level, a new reinforcement learning method that can effectively train two-step model.",
"## Model description \nCopy-or-Rewrite is a model to improve the workflow of summarization models. Existing methods that adopt an extract-then-abstract strategy have achieved impressive results, yet they suffer from the information loss in the abstraction step because they compress all the selected sentences without distinguish. Especially when the whole sentence is summary-worthy, salient content would be lost by compression. To address this problem, we pro- pose HYSUM, a hybrid framework for summarization that can flexibly switch between copying sentence and rewriting sentence according to the degree of redundancy. In this way, our approach can effectively combine the advantages of two branches of summarization, juggling informativity and conciseness. Moreover, we based on Hierarchical Reinforcement Learning, propose an end-to-end reinforcing method to bridge together the extraction module and rewriting module, which can enhance the cooperation between them. Automatic evaluation shows that our approach significantly outperforms the state-of-the-arts on the CNN/DailyMail corpus. Human evaluation also demonstrates that our generated summaries are more informative and concise than popular models.",
"## Intended uses & limitations\nWith this repository, you can generate informative and concise summaries for input articles. For other tasks, you may used the hierarchical representation module to effectively represent the article. The parameters of the model is pre-trained on CNN/DM dataset. You may need to fine-tune it other your own dataset when needed.",
"## How to use\n\n from transformers import AutoTokenizer, AutoModelForSeq2SeqLM\n \n tokenizer = AutoTokenizer.from_pretrained(\"LiqiangXiao/summarization\")\n \n model = AutoModelForSeq2SeqLM.from_pretrained(\"LiqiangXiao/summarization\")",
"## Training data\nThis model used the non-anonymous version of CNN/Daily Mail dataset.",
"## BibTeX entry and citation info\n @inproceedings{DBLP:conf/aaai/XiaoWHJ20,\n author = {Liqiang Xiao and\n Lu Wang and\n Hao He and\n Yaohui Jin},\n title = {Copy or Rewrite: Hybrid Summarization with Hierarchical Reinforcement\n Learning},\n booktitle = {The Thirty-Fourth {AAAI} Conference on Artificial Intelligence, {AAAI}\n 2020, The Thirty-Second Innovative Applications of Artificial Intelligence\n Conference, {IAAI} 2020, The Tenth {AAAI} Symposium on Educational\n Advances in Artificial Intelligence, {EAAI} 2020, New York, NY, USA,\n February 7-12, 2020},\n pages = {9306--9313},\n publisher = {{AAAI} Press},\n year = {2020},\n url = {URL\n timestamp = {Tue, 02 Feb 2021 08:00:14 +0100},\n biburl = {URL\n bibsource = {dblp computer science bibliography, URL}\n }"
] | [
"TAGS\n#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n",
"## Copy-or-Rewrite\nThis repository contains the code of paper \"Copy or Rewrite: Hybrid Summarization with Hierarchical Reinforcement Learning\". A model built for human-like summarization task and trained with Actor-critic Reinforcement Learning. This work significantly improved the ROUGE scores on CNN/DM dataset by 1.7 and augmented the informativity and readability of generated summaries. It implemented a more human-like workflow for summarization task solving the information loss problem. It contains a novel hierarchical transformer module to represent article in both word and sentence level, a new reinforcement learning method that can effectively train two-step model.",
"## Model description \nCopy-or-Rewrite is a model to improve the workflow of summarization models. Existing methods that adopt an extract-then-abstract strategy have achieved impressive results, yet they suffer from the information loss in the abstraction step because they compress all the selected sentences without distinguish. Especially when the whole sentence is summary-worthy, salient content would be lost by compression. To address this problem, we pro- pose HYSUM, a hybrid framework for summarization that can flexibly switch between copying sentence and rewriting sentence according to the degree of redundancy. In this way, our approach can effectively combine the advantages of two branches of summarization, juggling informativity and conciseness. Moreover, we based on Hierarchical Reinforcement Learning, propose an end-to-end reinforcing method to bridge together the extraction module and rewriting module, which can enhance the cooperation between them. Automatic evaluation shows that our approach significantly outperforms the state-of-the-arts on the CNN/DailyMail corpus. Human evaluation also demonstrates that our generated summaries are more informative and concise than popular models.",
"## Intended uses & limitations\nWith this repository, you can generate informative and concise summaries for input articles. For other tasks, you may used the hierarchical representation module to effectively represent the article. The parameters of the model is pre-trained on CNN/DM dataset. You may need to fine-tune it other your own dataset when needed.",
"## How to use\n\n from transformers import AutoTokenizer, AutoModelForSeq2SeqLM\n \n tokenizer = AutoTokenizer.from_pretrained(\"LiqiangXiao/summarization\")\n \n model = AutoModelForSeq2SeqLM.from_pretrained(\"LiqiangXiao/summarization\")",
"## Training data\nThis model used the non-anonymous version of CNN/Daily Mail dataset.",
"## BibTeX entry and citation info\n @inproceedings{DBLP:conf/aaai/XiaoWHJ20,\n author = {Liqiang Xiao and\n Lu Wang and\n Hao He and\n Yaohui Jin},\n title = {Copy or Rewrite: Hybrid Summarization with Hierarchical Reinforcement\n Learning},\n booktitle = {The Thirty-Fourth {AAAI} Conference on Artificial Intelligence, {AAAI}\n 2020, The Thirty-Second Innovative Applications of Artificial Intelligence\n Conference, {IAAI} 2020, The Tenth {AAAI} Symposium on Educational\n Advances in Artificial Intelligence, {EAAI} 2020, New York, NY, USA,\n February 7-12, 2020},\n pages = {9306--9313},\n publisher = {{AAAI} Press},\n year = {2020},\n url = {URL\n timestamp = {Tue, 02 Feb 2021 08:00:14 +0100},\n biburl = {URL\n bibsource = {dblp computer science bibliography, URL}\n }"
] |
text-classification | transformers |
# bert-base-cased-sentiment
This is a BERT model (bert-base-cased) fine-tuned for two-class sentiment analysis.

The sentiment is labeled only as positive or negative, depending on the input sentence.
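A minimal usage sketch is shown below; it assumes the checkpoint is published under the id of this entry and that standard `transformers` text-classification loading works for it, neither of which is stated in the card:

```py
from transformers import pipeline

# Hypothetical usage: the repository id and the label names returned
# (e.g. positive/negative vs LABEL_0/LABEL_1) are assumptions, not
# documented in the card.
classifier = pipeline("text-classification", model="Littlejohn/analisis_sentimientos")
print(classifier("The hotel room was clean and the staff were friendly."))
```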
## Training data
The dataset used to train the model was a collection of Amazon reviews, which can be downloaded from the original author on Kaggle: [Adam Bittlingmayer](https://www.kaggle.com/bittlingmayer/amazonreviews), Amazon Reviews for Sentiment Analysis.

The dataset consisted of only 40,000 sentences, of which only the first 100 words were taken to form each sentence.
## Accuracy
The fine-tuned model was evaluated on 3 test sets to measure its accuracy.
- The first test was on a dataset of hotel reviews
| Accuracy |
| -------- |
| 95% |
- The second test was on a dataset of food reviews
| Accuracy |
| -------- |
| 88% |
- The third test was on a dataset of general sentiment texts
| Accuracy |
| -------- |
| 65% |
## Contact
Contact via GitHub: [Murdoocc7](https://github.com/murdoocc) | {"language": ["en"], "pipeline_tag": "text-classification"} | Littlejohn/analisis_sentimientos | null | [
"transformers",
"text-classification",
"en",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #text-classification #en #endpoints_compatible #region-us
|
# bert-base-cased-sentiment
This is a BERT model (bert-base-cased) fine-tuned for two-class sentiment analysis.

The sentiment is labeled only as positive or negative, depending on the input sentence.
## Training data
The dataset used to train the model was a collection of Amazon reviews, which can be downloaded from the original author on Kaggle: Adam Bittlingmayer, Amazon Reviews for Sentiment Analysis.

The dataset consisted of only 40,000 sentences, of which only the first 100 words were taken to form each sentence.
## Accuracy
The fine-tuned model was evaluated on 3 test sets to measure its accuracy.
- The first test was on a dataset of hotel reviews
| Accuracy |
| -------- |
| 95% |
- The second test was on a dataset of food reviews
| Accuracy |
| -------- |
| 88% |
- The third test was on a dataset of general sentiment texts
| Accuracy |
| -------- |
| 65% |
## Contact
Contact via GitHub: Murdoocc7 | [
"# bert-base-cased-sentiment\n\nEs un modelo de BERT (bert-base-cased) afinado para el analisis de sentimientos para dos clases.\n\nEl sentimiento solo se define como positivo negativo según sea el caso de la oración suministrada.",
"## Training data\n\nEl set de datos utilizado para el entrenamiento del modelo fue a traves de una recopilación de reseñas de amazón, el cual se puede descargar desde el autor original en Kaggle Adam Bittlingmayer Amazon Reviews for Sentiment Analysis.\n\nEl numero de datos fue solo de 40 000 oraciones de las cuales solo se tomaron las primeras 100 palabras para conformar cada una de las oraciones.",
"## Accuaracy\nEl modelo afinado fue sometido a 3 pruebas para conocer su precisión.\n\n- La primera prueba fue en un set de datos de Reseñas de hoteles\n| Accuracy (Precisión) |\n| -------- | \n| 95% | \n\n- La segunda prueba fue en un set de datos de Reseñas de comida\n| Accuracy (Precisión) |\n| -------- | \n| 88% | \n\n- La tercera prueba fue en un set de datos de Sentimientos generales\n| Accuracy (Precisión) |\n| -------- |\n| 65% |",
"## Contact\n\nContacto a traves de github Murdoocc7"
] | [
"TAGS\n#transformers #text-classification #en #endpoints_compatible #region-us \n",
"# bert-base-cased-sentiment\n\nEs un modelo de BERT (bert-base-cased) afinado para el analisis de sentimientos para dos clases.\n\nEl sentimiento solo se define como positivo negativo según sea el caso de la oración suministrada.",
"## Training data\n\nEl set de datos utilizado para el entrenamiento del modelo fue a traves de una recopilación de reseñas de amazón, el cual se puede descargar desde el autor original en Kaggle Adam Bittlingmayer Amazon Reviews for Sentiment Analysis.\n\nEl numero de datos fue solo de 40 000 oraciones de las cuales solo se tomaron las primeras 100 palabras para conformar cada una de las oraciones.",
"## Accuaracy\nEl modelo afinado fue sometido a 3 pruebas para conocer su precisión.\n\n- La primera prueba fue en un set de datos de Reseñas de hoteles\n| Accuracy (Precisión) |\n| -------- | \n| 95% | \n\n- La segunda prueba fue en un set de datos de Reseñas de comida\n| Accuracy (Precisión) |\n| -------- | \n| 88% | \n\n- La tercera prueba fue en un set de datos de Sentimientos generales\n| Accuracy (Precisión) |\n| -------- |\n| 65% |",
"## Contact\n\nContacto a traves de github Murdoocc7"
] |