pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1–900k) | metadata (stringlengths 2–438k) | id (stringlengths 5–122) | last_modified (null) | tags (sequencelengths 1–1.84k) | sha (null) | created_at (stringlengths 25–25) | arxiv (sequencelengths 0–201) | languages (sequencelengths 0–1.83k) | tags_str (stringlengths 17–9.34k) | text_str (stringlengths 0–389k) | text_lists (sequencelengths 0–722) | processed_texts (sequencelengths 1–723) | tokens_length (sequencelengths 1–723) | input_texts (sequencelengths 1–1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALL03
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 8.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1951 | 1.0 | 1542 | 2.0285 |
| 2.0918 | 2.0 | 3084 | 1.9989 |
| 2.0562 | 3.0 | 4626 | 2.0162 |
| 2.0012 | 4.0 | 6168 | 1.9330 |
| 1.9705 | 5.0 | 7710 | 1.9151 |
| 1.9571 | 6.0 | 9252 | 1.9419 |
| 1.9113 | 7.0 | 10794 | 1.9175 |
| 1.8988 | 8.0 | 12336 | 1.9143 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
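The card stops at the framework versions, so a minimal usage sketch may help; it assumes the checkpoint is published under the repository id Jeska/BertjeWDialDataALL03 shown in this row, and the Dutch example sentence is purely illustrative.

```python
from transformers import pipeline

# Load the fine-tuned Bertje checkpoint as a fill-mask pipeline
# (repository id taken from this row of the dataset).
fill_mask = pipeline("fill-mask", model="Jeska/BertjeWDialDataALL03")

# Predict the masked token in an illustrative Dutch sentence.
for prediction in fill_mask("Ik heb gisteren een [MASK] gekocht."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```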
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "BertjeWDialDataALL03", "results": []}]} | Jeska/BertjeWDialDataALL03 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| BertjeWDialDataALL03
====================
This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9459
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 8.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 8.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 8.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
37,
142,
5,
43
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 8.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALL04
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2954 | 1.0 | 1542 | 2.0372 |
| 2.2015 | 2.0 | 3084 | 2.0104 |
| 2.1661 | 3.0 | 4626 | 2.0372 |
| 2.1186 | 4.0 | 6168 | 1.9549 |
| 2.0939 | 5.0 | 7710 | 1.9438 |
| 2.0867 | 6.0 | 9252 | 1.9648 |
| 2.0462 | 7.0 | 10794 | 1.9465 |
| 2.0315 | 8.0 | 12336 | 1.9412 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
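For readers who want to rerun a configuration like this, the sketch below shows one plausible mapping of the hyperparameters listed above onto transformers.TrainingArguments; the output directory is a placeholder, and the dataset, model, and Trainer wiring are omitted.

```python
from transformers import TrainingArguments

# One possible encoding of the hyperparameters reported above
# ("./bertje-wdialdata-all04" is a placeholder output path).
training_args = TrainingArguments(
    output_dir="./bertje-wdialdata-all04",
    learning_rate=2e-5,
    per_device_train_batch_size=16,   # train_batch_size: 16
    per_device_eval_batch_size=8,     # eval_batch_size: 8
    gradient_accumulation_steps=4,    # total_train_batch_size: 16 * 4 = 64
    num_train_epochs=8.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```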
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "BertjeWDialDataALL04", "results": []}]} | Jeska/BertjeWDialDataALL04 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| BertjeWDialDataALL04
====================
This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9717
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 8.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
37,
126,
5,
43
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALLQonly
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9438
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2122 | 1.0 | 871 | 2.0469 |
| 2.0961 | 2.0 | 1742 | 2.0117 |
| 2.0628 | 3.0 | 2613 | 2.0040 |
| 2.0173 | 4.0 | 3484 | 1.9901 |
| 1.9772 | 5.0 | 4355 | 1.9711 |
| 1.9455 | 6.0 | 5226 | 1.9785 |
| 1.917 | 7.0 | 6097 | 1.9380 |
| 1.8933 | 8.0 | 6968 | 1.9651 |
| 1.8708 | 9.0 | 7839 | 1.9915 |
| 1.862 | 10.0 | 8710 | 1.9310 |
| 1.8545 | 11.0 | 9581 | 1.9422 |
| 1.8231 | 12.0 | 10452 | 1.9310 |
| 1.8141 | 13.0 | 11323 | 1.9362 |
| 1.7939 | 14.0 | 12194 | 1.9334 |
| 1.8035 | 15.0 | 13065 | 1.9197 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
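The step counts in the results tables, together with the effective batch size, give a rough sense of how much data each variant saw; the sketch below works that out, with the caveat that partial final batches make the estimates approximate.

```python
# Back-of-the-envelope estimate: steps_per_epoch * effective_batch_size
# roughly equals the number of training examples per epoch.
effective_batch_size = 16 * 4            # train_batch_size * gradient_accumulation_steps

steps_per_epoch_all = 1542               # BertjeWDialDataALL* cards above
steps_per_epoch_qonly = 871              # BertjeWDialDataALLQonly* cards

print(steps_per_epoch_all * effective_batch_size)    # ~98,688 examples (full dialogue data)
print(steps_per_epoch_qonly * effective_batch_size)  # ~55,744 examples (questions only)
```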
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "BertjeWDialDataALLQonly", "results": []}]} | Jeska/BertjeWDialDataALLQonly | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| BertjeWDialDataALLQonly
=======================
This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9438
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
37,
126,
5,
43
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALLQonly02
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2438 | 1.0 | 871 | 2.1122 |
| 2.1235 | 2.0 | 1742 | 2.0784 |
| 2.0712 | 3.0 | 2613 | 2.0679 |
| 2.0034 | 4.0 | 3484 | 2.0546 |
| 1.9375 | 5.0 | 4355 | 2.0277 |
| 1.8911 | 6.0 | 5226 | 2.0364 |
| 1.8454 | 7.0 | 6097 | 1.9812 |
| 1.808 | 8.0 | 6968 | 2.0175 |
| 1.7716 | 9.0 | 7839 | 2.0286 |
| 1.7519 | 10.0 | 8710 | 1.9653 |
| 1.7358 | 11.0 | 9581 | 1.9817 |
| 1.7084 | 12.0 | 10452 | 1.9633 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "BertjeWDialDataALLQonly02", "results": []}]} | Jeska/BertjeWDialDataALLQonly02 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| BertjeWDialDataALLQonly02
=========================
This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9043
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 12.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
37,
126,
5,
43
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALLQonly03
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 24.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 435 | 2.0751 |
| 2.1982 | 2.0 | 870 | 2.0465 |
| 2.0841 | 3.0 | 1305 | 2.0420 |
| 2.0374 | 4.0 | 1740 | 2.0325 |
| 1.9731 | 5.0 | 2175 | 2.0075 |
| 1.9248 | 6.0 | 2610 | 2.0219 |
| 1.8848 | 7.0 | 3045 | 1.9770 |
| 1.8848 | 8.0 | 3480 | 2.0093 |
| 1.8419 | 9.0 | 3915 | 2.0298 |
| 1.804 | 10.0 | 4350 | 1.9681 |
| 1.7817 | 11.0 | 4785 | 1.9938 |
| 1.7472 | 12.0 | 5220 | 1.9654 |
| 1.7075 | 13.0 | 5655 | 1.9797 |
| 1.6976 | 14.0 | 6090 | 1.9691 |
| 1.6748 | 15.0 | 6525 | 1.9568 |
| 1.6748 | 16.0 | 6960 | 1.9618 |
| 1.6528 | 17.0 | 7395 | 1.9843 |
| 1.6335 | 18.0 | 7830 | 1.9265 |
| 1.6179 | 19.0 | 8265 | 1.9598 |
| 1.5992 | 20.0 | 8700 | 1.9331 |
| 1.583 | 21.0 | 9135 | 1.9795 |
| 1.5699 | 22.0 | 9570 | 2.0073 |
| 1.5703 | 23.0 | 10005 | 1.9308 |
| 1.5703 | 24.0 | 10440 | 1.9285 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
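Unlike the other Qonly runs, this one accumulates gradients over 8 steps, so the effective batch is 16 × 8 = 128 on the same hardware. The runnable toy sketch below illustrates the accumulation pattern; the linear model and random data are stand-ins, not part of the card.

```python
import torch
from torch import nn

# Toy stand-ins for the real model and dataloader (not from the card).
model = nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5, betas=(0.9, 0.999), eps=1e-8)
dataloader = [(torch.randn(16, 10), torch.randn(16, 1)) for _ in range(32)]

accumulation_steps = 8  # as in this card: effective batch = 16 * 8 = 128

optimizer.zero_grad()
for step, (inputs, targets) in enumerate(dataloader):
    loss = nn.functional.mse_loss(model(inputs), targets)
    # Scale the loss so the accumulated gradient matches one batch of 128.
    (loss / accumulation_steps).backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```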
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "BertjeWDialDataALLQonly03", "results": []}]} | Jeska/BertjeWDialDataALLQonly03 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| BertjeWDialDataALLQonly03
=========================
This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9995
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 24.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 24.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 24.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
37,
126,
5,
43
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 24.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALLQonly05
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.9349 | 1.0 | 871 | 2.9642 |
| 2.9261 | 2.0 | 1742 | 2.9243 |
| 2.8409 | 3.0 | 2613 | 2.8895 |
| 2.7308 | 4.0 | 3484 | 2.8394 |
| 2.6042 | 5.0 | 4355 | 2.7703 |
| 2.4671 | 6.0 | 5226 | 2.7522 |
| 2.3481 | 7.0 | 6097 | 2.6339 |
| 2.2493 | 8.0 | 6968 | 2.6224 |
| 2.1233 | 9.0 | 7839 | 2.5637 |
| 2.0194 | 10.0 | 8710 | 2.4896 |
| 1.9178 | 11.0 | 9581 | 2.4689 |
| 1.8588 | 12.0 | 10452 | 2.4663 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
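Across the Qonly variants the main change is the peak learning rate, and the final eval loss clearly degrades at 3e-4; the values below are copied from the cards in this dump (note the runs also differ in epoch count and, for Qonly03, in batch size, so this is only a rough comparison).

```python
# Final eval loss per peak learning rate, copied from the BertjeWDialDataALLQonly* cards.
final_eval_loss = {
    2e-5: 1.9438,  # BertjeWDialDataALLQonly (15 epochs)
    5e-5: 1.9043,  # BertjeWDialDataALLQonly02 / 09 (12 epochs)
    1e-4: 2.1135,  # BertjeWDialDataALLQonly07 (18 epochs)
    3e-4: 2.3921,  # BertjeWDialDataALLQonly05, this card (12 epochs)
}
for lr, loss in sorted(final_eval_loss.items()):
    print(f"lr={lr:.0e}  eval_loss={loss:.4f}")
```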
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "BertjeWDialDataALLQonly05", "results": []}]} | Jeska/BertjeWDialDataALLQonly05 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| BertjeWDialDataALLQonly05
=========================
This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.3921
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 12.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
37,
126,
5,
43
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALLQonly07
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 18.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.3589 | 1.0 | 871 | 2.2805 |
| 2.2563 | 2.0 | 1742 | 2.2501 |
| 2.1936 | 3.0 | 2613 | 2.2419 |
| 2.11 | 4.0 | 3484 | 2.2301 |
| 2.0311 | 5.0 | 4355 | 2.2320 |
| 1.969 | 6.0 | 5226 | 2.2276 |
| 1.9148 | 7.0 | 6097 | 2.1621 |
| 1.8569 | 8.0 | 6968 | 2.1876 |
| 1.7978 | 9.0 | 7839 | 2.2011 |
| 1.7602 | 10.0 | 8710 | 2.1280 |
| 1.7166 | 11.0 | 9581 | 2.1644 |
| 1.6651 | 12.0 | 10452 | 2.1246 |
| 1.6141 | 13.0 | 11323 | 2.1264 |
| 1.5759 | 14.0 | 12194 | 2.1143 |
| 1.5478 | 15.0 | 13065 | 2.0982 |
| 1.5311 | 16.0 | 13936 | 2.0993 |
| 1.5187 | 17.0 | 14807 | 2.0979 |
| 1.4809 | 18.0 | 15678 | 2.0338 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "BertjeWDialDataALLQonly07", "results": []}]} | Jeska/BertjeWDialDataALLQonly07 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| BertjeWDialDataALLQonly07
=========================
This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.1135
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 18.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 18.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 18.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
37,
126,
5,
43
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 18.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALLQonly09
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2439 | 1.0 | 871 | 2.1102 |
| 2.1235 | 2.0 | 1742 | 2.0785 |
| 2.0709 | 3.0 | 2613 | 2.0689 |
| 2.0033 | 4.0 | 3484 | 2.0565 |
| 1.9386 | 5.0 | 4355 | 2.0290 |
| 1.8909 | 6.0 | 5226 | 2.0366 |
| 1.8449 | 7.0 | 6097 | 1.9809 |
| 1.8078 | 8.0 | 6968 | 2.0177 |
| 1.7709 | 9.0 | 7839 | 2.0289 |
| 1.7516 | 10.0 | 8710 | 1.9645 |
| 1.7354 | 11.0 | 9581 | 1.9810 |
| 1.7073 | 12.0 | 10452 | 1.9631 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "BertjeWDialDataALLQonly09", "results": []}]} | Jeska/BertjeWDialDataALLQonly09 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| BertjeWDialDataALLQonly09
=========================
This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9043
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 12.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
37,
126,
5,
43
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataALLQonly128
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2326 | 1.0 | 871 | 2.1509 |
| 2.1375 | 2.0 | 1742 | 2.1089 |
| 2.0442 | 3.0 | 2613 | 2.0655 |
| 2.0116 | 4.0 | 3484 | 2.0433 |
| 1.9346 | 5.0 | 4355 | 2.0134 |
| 1.9056 | 6.0 | 5226 | 1.9956 |
| 1.8295 | 7.0 | 6097 | 2.0287 |
| 1.8204 | 8.0 | 6968 | 2.0173 |
| 1.7928 | 9.0 | 7839 | 2.0251 |
| 1.7357 | 10.0 | 8710 | 2.0148 |
| 1.7318 | 11.0 | 9581 | 1.9274 |
| 1.7311 | 12.0 | 10452 | 1.9314 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "BertjeWDialDataALLQonly128", "results": []}]} | Jeska/BertjeWDialDataALLQonly128 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| BertjeWDialDataALLQonly128
==========================
This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.0364
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 12.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
37,
126,
5,
43
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 12.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertjeWDialDataQA20k
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1713 | 1.0 | 1542 | 2.0098 |
| 2.0736 | 2.0 | 3084 | 1.9853 |
| 2.0543 | 3.0 | 4626 | 2.0134 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
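The cards report only the masked-LM cross-entropy; a common way to make that number more interpretable is to exponentiate it into a (pseudo-)perplexity, as sketched below. This conversion is standard practice rather than something stated in the card.

```python
import math

# Eval losses in these cards are mean masked-LM cross-entropy values;
# exp(loss) gives the corresponding (pseudo-)perplexity.
eval_loss = 1.9208  # BertjeWDialDataQA20k, from the card above
print(f"perplexity ~ {math.exp(eval_loss):.2f}")  # ~6.83
```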
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "BertjeWDialDataQA20k", "results": []}]} | Jeska/BertjeWDialDataQA20k | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| BertjeWDialDataQA20k
====================
This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9208
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
37,
126,
5,
43
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VaccinChatSentenceClassifierDutch_fromBERTje
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6223
- Accuracy: 0.9068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.4666 | 1.0 | 1320 | 2.3355 | 0.5768 |
| 1.5293 | 2.0 | 2640 | 1.1118 | 0.8144 |
| 0.8031 | 3.0 | 3960 | 0.6362 | 0.8803 |
| 0.2985 | 4.0 | 5280 | 0.5119 | 0.8958 |
| 0.1284 | 5.0 | 6600 | 0.5023 | 0.8931 |
| 0.0842 | 6.0 | 7920 | 0.5246 | 0.9022 |
| 0.0414 | 7.0 | 9240 | 0.5581 | 0.9013 |
| 0.0372 | 8.0 | 10560 | 0.5721 | 0.9004 |
| 0.0292 | 9.0 | 11880 | 0.5469 | 0.9141 |
| 0.0257 | 10.0 | 13200 | 0.5871 | 0.9059 |
| 0.0189 | 11.0 | 14520 | 0.6181 | 0.9049 |
| 0.0104 | 12.0 | 15840 | 0.6184 | 0.9068 |
| 0.009 | 13.0 | 17160 | 0.6013 | 0.9049 |
| 0.0051 | 14.0 | 18480 | 0.6205 | 0.9059 |
| 0.0035 | 15.0 | 19800 | 0.6223 | 0.9068 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
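As with the fill-mask cards, no usage example is given; a minimal text-classification sketch follows, assuming the checkpoint is available as Jeska/VaccinChatSentenceClassifierDutch_fromBERTje and that the intent labels stored in its config are the ones to surface. The Dutch query is illustrative only.

```python
from transformers import pipeline

# Load the fine-tuned intent classifier (repository id taken from this row).
classifier = pipeline(
    "text-classification",
    model="Jeska/VaccinChatSentenceClassifierDutch_fromBERTje",
)

# Classify an illustrative Dutch question about vaccination.
print(classifier("Is het vaccin veilig voor zwangere vrouwen?"))
```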
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "VaccinChatSentenceClassifierDutch_fromBERTje", "results": []}]} | Jeska/VaccinChatSentenceClassifierDutch_fromBERTje | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| VaccinChatSentenceClassifierDutch\_fromBERTje
=============================================
This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6223
* Accuracy: 0.9068
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
37,
103,
5,
43
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VaccinChatSentenceClassifierDutch_fromBERTje2
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5112
- Accuracy: 0.9004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 4.1505 | 1.0 | 1320 | 3.3293 | 0.3793 |
| 2.7333 | 2.0 | 2640 | 2.2295 | 0.6133 |
| 2.0189 | 3.0 | 3960 | 1.5134 | 0.7587 |
| 1.2504 | 4.0 | 5280 | 1.0765 | 0.8282 |
| 0.7733 | 5.0 | 6600 | 0.7937 | 0.8629 |
| 0.5217 | 6.0 | 7920 | 0.6438 | 0.8784 |
| 0.3148 | 7.0 | 9240 | 0.5733 | 0.8857 |
| 0.2067 | 8.0 | 10560 | 0.5362 | 0.8912 |
| 0.1507 | 9.0 | 11880 | 0.5098 | 0.8903 |
| 0.1024 | 10.0 | 13200 | 0.5078 | 0.8976 |
| 0.0837 | 11.0 | 14520 | 0.5054 | 0.8967 |
| 0.0608 | 12.0 | 15840 | 0.5062 | 0.8958 |
| 0.0426 | 13.0 | 17160 | 0.5072 | 0.9013 |
| 0.0374 | 14.0 | 18480 | 0.5110 | 0.9040 |
| 0.0346 | 15.0 | 19800 | 0.5112 | 0.9004 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
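All of these runs use lr_scheduler_type: linear, i.e. the learning rate decays linearly from its peak to zero over training (after any warmup). A sketch with this card's settings (peak 1e-05, no warmup, 19,800 optimizer steps from the results table) is shown below; the single dummy parameter exists only to make the snippet runnable.

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Dummy parameter so the snippet runs standalone; in the real run this is the BERT model.
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.Adam([param], lr=1e-5, betas=(0.9, 0.999), eps=1e-6)

# Linear decay to zero with no warmup over 19,800 steps (15 epochs * 1,320 steps/epoch).
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=19_800
)

for step in range(19_800):
    optimizer.step()
    scheduler.step()
    if step in (0, 9_899, 19_799):
        print(step, scheduler.get_last_lr())
```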
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "VaccinChatSentenceClassifierDutch_fromBERTje2", "results": []}]} | Jeska/VaccinChatSentenceClassifierDutch_fromBERTje2 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| VaccinChatSentenceClassifierDutch\_fromBERTje2
==============================================
This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5112
* Accuracy: 0.9004
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
* lr\_scheduler\_type: linear
* num\_epochs: 15.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
37,
103,
5,
43
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialog
This model is a fine-tuned version of [outputDA/checkpoint-7710](https://huggingface.co/outputDA/checkpoint-7710) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5025
- Accuracy: 0.9077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.9925 | 1.0 | 1320 | 3.0954 | 0.4223 |
| 2.5041 | 2.0 | 2640 | 1.9762 | 0.6563 |
| 1.8061 | 3.0 | 3960 | 1.3196 | 0.7952 |
| 1.0694 | 4.0 | 5280 | 0.9304 | 0.8510 |
| 0.6479 | 5.0 | 6600 | 0.6875 | 0.8821 |
| 0.4408 | 6.0 | 7920 | 0.5692 | 0.8976 |
| 0.2542 | 7.0 | 9240 | 0.5291 | 0.8949 |
| 0.1709 | 8.0 | 10560 | 0.5038 | 0.9059 |
| 0.1181 | 9.0 | 11880 | 0.4885 | 0.9049 |
| 0.0878 | 10.0 | 13200 | 0.4900 | 0.9049 |
| 0.0702 | 11.0 | 14520 | 0.4930 | 0.9086 |
| 0.0528 | 12.0 | 15840 | 0.4987 | 0.9113 |
| 0.0406 | 13.0 | 17160 | 0.5009 | 0.9113 |
| 0.0321 | 14.0 | 18480 | 0.5017 | 0.9104 |
| 0.0308 | 15.0 | 19800 | 0.5025 | 0.9077 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialog", "results": []}]} | Jeska/VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialog | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| VaccinChatSentenceClassifierDutch\_fromBERTje2\_DAdialog
========================================================
This model is a fine-tuned version of outputDA/checkpoint-7710 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5025
* Accuracy: 0.9077
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
* lr\_scheduler\_type: linear
* num\_epochs: 15.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
37,
103,
5,
43
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialogQonly
This model is a fine-tuned version of [outputDAQonly/checkpoint-8710](https://huggingface.co/outputDAQonly/checkpoint-8710) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5008
- Accuracy: 0.9068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 4.0751 | 1.0 | 1320 | 3.1674 | 0.4086 |
| 2.5619 | 2.0 | 2640 | 2.0335 | 0.6426 |
| 1.8549 | 3.0 | 3960 | 1.3537 | 0.7861 |
| 1.106 | 4.0 | 5280 | 0.9515 | 0.8519 |
| 0.6698 | 5.0 | 6600 | 0.7152 | 0.8757 |
| 0.4497 | 6.0 | 7920 | 0.5838 | 0.8921 |
| 0.2626 | 7.0 | 9240 | 0.5300 | 0.8940 |
| 0.1762 | 8.0 | 10560 | 0.4984 | 0.8958 |
| 0.119 | 9.0 | 11880 | 0.4906 | 0.9059 |
| 0.0919 | 10.0 | 13200 | 0.4896 | 0.8995 |
| 0.0722 | 11.0 | 14520 | 0.5012 | 0.9022 |
| 0.0517 | 12.0 | 15840 | 0.4951 | 0.9040 |
| 0.0353 | 13.0 | 17160 | 0.4988 | 0.9040 |
| 0.0334 | 14.0 | 18480 | 0.5035 | 0.9049 |
| 0.0304 | 15.0 | 19800 | 0.5008 | 0.9068 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialogQonly", "results": []}]} | Jeska/VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialogQonly | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| VaccinChatSentenceClassifierDutch\_fromBERTje2\_DAdialogQonly
=============================================================
This model is a fine-tuned version of outputDAQonly/checkpoint-8710 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5008
* Accuracy: 0.9068
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
* lr\_scheduler\_type: linear
* num\_epochs: 15.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
37,
103,
5,
43
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialogQonly09
This model is a fine-tuned version of [outputDAQonly09/](https://huggingface.co/outputDAQonly09/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4978
- Accuracy: 0.9031
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 330 | 3.9692 | 0.2249 |
| 4.3672 | 2.0 | 660 | 3.1312 | 0.4031 |
| 4.3672 | 3.0 | 990 | 2.5068 | 0.5658 |
| 3.1495 | 4.0 | 1320 | 2.0300 | 0.6600 |
| 2.2491 | 5.0 | 1650 | 1.6517 | 0.7450 |
| 2.2491 | 6.0 | 1980 | 1.3604 | 0.7943 |
| 1.622 | 7.0 | 2310 | 1.1328 | 0.8327 |
| 1.1252 | 8.0 | 2640 | 0.9484 | 0.8611 |
| 1.1252 | 9.0 | 2970 | 0.8212 | 0.8757 |
| 0.7969 | 10.0 | 3300 | 0.7243 | 0.8830 |
| 0.5348 | 11.0 | 3630 | 0.6597 | 0.8867 |
| 0.5348 | 12.0 | 3960 | 0.5983 | 0.8857 |
| 0.3744 | 13.0 | 4290 | 0.5635 | 0.8976 |
| 0.2564 | 14.0 | 4620 | 0.5437 | 0.8985 |
| 0.2564 | 15.0 | 4950 | 0.5124 | 0.9013 |
| 0.1862 | 16.0 | 5280 | 0.5074 | 0.9022 |
| 0.1349 | 17.0 | 5610 | 0.5028 | 0.9049 |
| 0.1349 | 18.0 | 5940 | 0.4876 | 0.9077 |
| 0.0979 | 19.0 | 6270 | 0.4971 | 0.9049 |
| 0.0763 | 20.0 | 6600 | 0.4941 | 0.9022 |
| 0.0763 | 21.0 | 6930 | 0.4957 | 0.9049 |
| 0.0602 | 22.0 | 7260 | 0.4989 | 0.9049 |
| 0.0504 | 23.0 | 7590 | 0.4959 | 0.9040 |
| 0.0504 | 24.0 | 7920 | 0.4944 | 0.9031 |
| 0.0422 | 25.0 | 8250 | 0.4985 | 0.9040 |
| 0.0379 | 26.0 | 8580 | 0.4970 | 0.9049 |
| 0.0379 | 27.0 | 8910 | 0.4949 | 0.9040 |
| 0.0351 | 28.0 | 9240 | 0.4971 | 0.9040 |
| 0.0321 | 29.0 | 9570 | 0.4967 | 0.9031 |
| 0.0321 | 30.0 | 9900 | 0.4978 | 0.9031 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialogQonly09", "results": []}]} | Jeska/VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialogQonly09 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| VaccinChatSentenceClassifierDutch\_fromBERTje2\_DAdialogQonly09
===============================================================
This model is a fine-tuned version of outputDAQonly09/ on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4978
* Accuracy: 0.9031
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 30.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
37,
103,
5,
43
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# VaccinChatSentenceClassifierDutch_fromBERTjeDIAL
This model is a fine-tuned version of [Jeska/BertjeWDialDataQA20k](https://huggingface.co/Jeska/BertjeWDialDataQA20k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8355
- Accuracy: 0.6322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.4418 | 1.0 | 1457 | 2.3866 | 0.5406 |
| 1.7742 | 2.0 | 2914 | 1.9365 | 0.6069 |
| 1.1313 | 3.0 | 4371 | 1.8355 | 0.6322 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "VaccinChatSentenceClassifierDutch_fromBERTjeDIAL", "results": []}]} | Jeska/VaccinChatSentenceClassifierDutch_fromBERTjeDIAL | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| VaccinChatSentenceClassifierDutch\_fromBERTjeDIAL
=================================================
This model is a fine-tuned version of Jeska/BertjeWDialDataQA20k on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.8355
* Accuracy: 0.6322
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.13.0.dev0
* Pytorch 1.10.0
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
37,
103,
5,
43
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0### Training results### Framework versions\n\n\n* Transformers 4.13.0.dev0\n* Pytorch 1.10.0\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 22144706
- CO2 Emissions (in grams): 27.135492487925884
## Validation Metrics
- Loss: 1.81697416305542
- Accuracy: 0.6377269139700079
- Macro F1: 0.5181293370145044
- Micro F1: 0.6377269139700079
- Weighted F1: 0.631117826235572
- Macro Precision: 0.5371452512845428
- Micro Precision: 0.6377269139700079
- Weighted Precision: 0.6655055695465463
- Macro Recall: 0.5609328178925124
- Micro Recall: 0.6377269139700079
- Weighted Recall: 0.6377269139700079
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Jeska/autonlp-vaccinfaq-22144706
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Jeska/autonlp-vaccinfaq-22144706", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Jeska/autonlp-vaccinfaq-22144706", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
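# Example of interpreting the output (assuming the checkpoint ships its id2label mapping):
probs = outputs.logits.softmax(dim=-1)
predicted_class_id = probs.argmax(dim=-1).item()
print(model.config.id2label[predicted_class_id], probs.max().item())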
``` | {"language": "unk", "tags": "autonlp", "datasets": ["Jeska/autonlp-data-vaccinfaq"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 27.135492487925884} | Jeska/autonlp-vaccinfaq-22144706 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"unk",
"dataset:Jeska/autonlp-data-vaccinfaq",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"unk"
] | TAGS
#transformers #pytorch #bert #text-classification #autonlp #unk #dataset-Jeska/autonlp-data-vaccinfaq #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 22144706
- CO2 Emissions (in grams): 27.135492487925884
## Validation Metrics
- Loss: 1.81697416305542
- Accuracy: 0.6377269139700079
- Macro F1: 0.5181293370145044
- Micro F1: 0.6377269139700079
- Weighted F1: 0.631117826235572
- Macro Precision: 0.5371452512845428
- Micro Precision: 0.6377269139700079
- Weighted Precision: 0.6655055695465463
- Macro Recall: 0.5609328178925124
- Micro Recall: 0.6377269139700079
- Weighted Recall: 0.6377269139700079
## Usage
You can use cURL to access this model:
Or Python API:
| [
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 22144706\n- CO2 Emissions (in grams): 27.135492487925884",
"## Validation Metrics\n\n- Loss: 1.81697416305542\n- Accuracy: 0.6377269139700079\n- Macro F1: 0.5181293370145044\n- Micro F1: 0.6377269139700079\n- Weighted F1: 0.631117826235572\n- Macro Precision: 0.5371452512845428\n- Micro Precision: 0.6377269139700079\n- Weighted Precision: 0.6655055695465463\n- Macro Recall: 0.5609328178925124\n- Micro Recall: 0.6377269139700079\n- Weighted Recall: 0.6377269139700079",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #unk #dataset-Jeska/autonlp-data-vaccinfaq #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 22144706\n- CO2 Emissions (in grams): 27.135492487925884",
"## Validation Metrics\n\n- Loss: 1.81697416305542\n- Accuracy: 0.6377269139700079\n- Macro F1: 0.5181293370145044\n- Micro F1: 0.6377269139700079\n- Weighted F1: 0.631117826235572\n- Macro Precision: 0.5371452512845428\n- Micro Precision: 0.6377269139700079\n- Weighted Precision: 0.6655055695465463\n- Macro Recall: 0.5609328178925124\n- Micro Recall: 0.6377269139700079\n- Weighted Recall: 0.6377269139700079",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] | [
61,
44,
180,
16
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #unk #dataset-Jeska/autonlp-data-vaccinfaq #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 22144706\n- CO2 Emissions (in grams): 27.135492487925884## Validation Metrics\n\n- Loss: 1.81697416305542\n- Accuracy: 0.6377269139700079\n- Macro F1: 0.5181293370145044\n- Micro F1: 0.6377269139700079\n- Weighted F1: 0.631117826235572\n- Macro Precision: 0.5371452512845428\n- Micro Precision: 0.6377269139700079\n- Weighted Precision: 0.6655055695465463\n- Macro Recall: 0.5609328178925124\n- Micro Recall: 0.6377269139700079\n- Weighted Recall: 0.6377269139700079## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
null | null | `LOREN` is an interpretable fact verification model trained on [FEVER](https://fever.ai), which aims to predict the veracity of a textual claim against a trustworthy knowledge source such as Wikipedia.
`LOREN` also decomposes the verification and makes accurate and faithful phrase-level veracity predictions without any phrasal veracity supervision.
This repo hosts the following pre-trained models for `LOREN`:
- `fact_checking/`: the verification models based on BERT (large) and RoBERTa (large), respectively.
- `mrc_seq2seq/`: the generative machine reading comprehension model based on BART (base).
- `evidence_retrieval/`: the evidence sentence ranking models, which are copied directly from [KGAT](https://github.com/thunlp/KernelGAT).
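Since the repo hosts plain model directories rather than a single standard checkpoint, one simple way to fetch everything locally is a full snapshot download via `huggingface_hub`; the sketch below assumes only that the repo id matches this model card, and the subdirectory names are the ones listed above:
```python
# Sketch: download the whole repository; subdirectories follow the list above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="jiangjiechen/loren")
print(local_dir)  # should contain fact_checking/, mrc_seq2seq/, evidence_retrieval/
```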
More technical details can be found at [this GitHub Repo](https://github.com/jiangjiechen/LOREN).
Please check out our AAAI 2022 paper for more details: "[LOREN: Logic-Regularized Reasoning for Interpretable Fact Verification](https://arxiv.org/abs/2012.13577)". | {} | jiangjiechen/loren | null | [
"arxiv:2012.13577",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2012.13577"
] | [] | TAGS
#arxiv-2012.13577 #region-us
| 'LOREN' is an interpretable fact verification model trained on FEVER, which aims to predict the veracity of a textual claim against a trustworthy knowledge source such as Wikipedia.
'LOREN' also decomposes the verification and makes accurate and faithful phrase-level veracity predictions without any phrasal veracity supervision.
This repo hosts the following pre-trained models for 'LOREN':
- 'fact_checking/': the verification models based on BERT (large) and RoBERTa (large), respectively.
- 'mrc_seq2seq/': the generative machine reading comprehension model based on BART (base).
- 'evidence_retrieval/': the evidence sentence ranking models, which are copied directly from KGAT.
More technical details can be found at this GitHub Repo.
Please check out our AAAI 2022 paper for more details: "LOREN: Logic-Regularized Reasoning for Interpretable Fact Verification". | [] | [
"TAGS\n#arxiv-2012.13577 #region-us \n"
] | [
15
] | [
"TAGS\n#arxiv-2012.13577 #region-us \n"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-nli
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1357
- Accuracy: 0.756
## Model description
More information needed
## Intended uses & limitations
More information needed
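As an informal illustration of the intended use (natural language inference on KLUE-style Korean premise–hypothesis pairs), a minimal sketch is given below. The example sentences are placeholders and the label order is whatever `model.config.id2label` reports; neither is documented by this card:
```python
# Minimal sketch (assumptions noted above): score a premise-hypothesis pair.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("Jihyun22/bert-base-finetuned-nli")
tokenizer = AutoTokenizer.from_pretrained("Jihyun22/bert-base-finetuned-nli")

premise = "오늘은 하루 종일 비가 내렸다."   # placeholder premise
hypothesis = "오늘은 날씨가 맑았다."        # placeholder hypothesis
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
probs = model(**inputs).logits.softmax(dim=-1)
print({model.config.id2label[i]: round(p, 3) for i, p in enumerate(probs[0].tolist())})
```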
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 196 | 0.7357 | 0.156 |
| No log | 2.0 | 392 | 0.5952 | 0.0993 |
| 0.543 | 3.0 | 588 | 0.5630 | 0.099 |
| 0.543 | 4.0 | 784 | 0.5670 | 0.079 |
| 0.543 | 5.0 | 980 | 0.5795 | 0.078 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "datasets": ["klue"], "metrics": ["accuracy"], "model_index": [{"name": "bert-base-finetuned-nli", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "klue", "type": "klue", "args": "nli"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.756}}]}]} | Jihyun22/bert-base-finetuned-nli | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:klue",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #text-classification #generated_from_trainer #dataset-klue #autotrain_compatible #endpoints_compatible #region-us
| bert-base-finetuned-nli
=======================
This model is a fine-tuned version of klue/bert-base on the klue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1357
* Accuracy: 0.756
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #dataset-klue #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] | [
41,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #dataset-klue #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5### Training results### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testing
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6644
- Accuracy: 0.6814
- F1: 0.8105
- Combined Score: 0.7459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
### Training results
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.11.0
- Tokenizers 0.10.3
| {"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "testing", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE MRPC", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.6813725490196079, "name": "Accuracy"}, {"type": "f1", "value": 0.8104956268221574, "name": "F1"}]}]}]} | LysandreJik/testing | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# testing
This model is a fine-tuned version of distilbert-base-uncased on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6644
- Accuracy: 0.6814
- F1: 0.8105
- Combined Score: 0.7459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
### Training results
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.11.0
- Tokenizers 0.10.3
| [
"# testing\n\nThis model is a fine-tuned version of distilbert-base-uncased on the GLUE MRPC dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6644\n- Accuracy: 0.6814\n- F1: 0.8105\n- Combined Score: 0.7459",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 10",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.0.dev0\n- Pytorch 1.9.0+cu111\n- Datasets 1.11.0\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# testing\n\nThis model is a fine-tuned version of distilbert-base-uncased on the GLUE MRPC dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6644\n- Accuracy: 0.6814\n- F1: 0.8105\n- Combined Score: 0.7459",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 10",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.0.dev0\n- Pytorch 1.9.0+cu111\n- Datasets 1.11.0\n- Tokenizers 0.10.3"
] | [
58,
68,
7,
9,
9,
4,
91,
5,
47
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #en #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n# testing\n\nThis model is a fine-tuned version of distilbert-base-uncased on the GLUE MRPC dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6644\n- Accuracy: 0.6814\n- F1: 0.8105\n- Combined Score: 0.7459## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 10### Training results### Framework versions\n\n- Transformers 4.11.0.dev0\n- Pytorch 1.9.0+cu111\n- Datasets 1.11.0\n- Tokenizers 0.10.3"
] |
text-generation | transformers |
# Jimmy's character DialoGPT model | {"tags": ["conversational"]} | JimmyHodl/DialoGPT-medium | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Jimmy's character DialoGPT model | [
"# Jimmy's character DialoGPT model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Jimmy's character DialoGPT model"
] | [
39,
9
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Jimmy's character DialoGPT model"
] |
null | transformers |
# KrELECTRA-base-mecab
Korean-based Pre-trained ELECTRA Language Model using Mecab (Morphological Analyzer)
## Usage
### Load model and tokenizer
```python
>>> from transformers import AutoTokenizer, AutoModelForPreTraining
>>> model = AutoModelForPreTraining.from_pretrained("Jinhwan/krelectra-base-mecab")
>>> tokenizer = AutoTokenizer.from_pretrained("Jinhwan/krelectra-base-mecab")
```
### Tokenizer example
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("Jinhwan/krelectra-base-mecab")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'EL', '##ECT', '##RA', '##를', '공유', '##합', '##니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'EL', '##ECT', '##RA', '##를', '공유', '##합', '##니다', '.', '[SEP]'])
[2, 7214, 24023, 24663, 26580, 3195, 7086, 3746, 5500, 17, 3]
```
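As a follow-up to the load example above, the following is a hedged sketch of running the ELECTRA discriminator head on the same sentence used in the tokenizer example (one replaced-token-detection score per token):
```python
import torch
from transformers import AutoTokenizer, AutoModelForPreTraining

tokenizer = AutoTokenizer.from_pretrained("Jinhwan/krelectra-base-mecab")
model = AutoModelForPreTraining.from_pretrained("Jinhwan/krelectra-base-mecab")

inputs = tokenizer("한국어 ELECTRA를 공유합니다.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one replaced-token-detection score per token
print(torch.round(torch.sigmoid(logits)))
```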
| {"language": "ko", "license": "apache-2.0", "tags": ["korean"]} | Jinhwan/krelectra-base-mecab | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"korean",
"ko",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ko"
] | TAGS
#transformers #pytorch #electra #pretraining #korean #ko #license-apache-2.0 #endpoints_compatible #region-us
|
# KrELECTRA-base-mecab
Korean-based Pre-trained ELECTRA Language Model using Mecab (Morphological Analyzer)
## Usage
### Load model and tokenizer
### Tokenizer example
'''python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("Jinhwan/krelectra-base-mecab")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'EL', '##ECT', '##RA', '##를', '공유', '##합', '##니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'EL', '##ECT', '##RA', '##를', '공유', '##합', '##니다', '.', '[SEP]'])
[2, 7214, 24023, 24663, 26580, 3195, 7086, 3746, 5500, 17, 3]
| [
"# KrELECTRA-base-mecab\nKorean-based Pre-trained ELECTRA Language Model using Mecab (Morphological Analyzer)",
"## Usage",
"### Load model and tokenizer",
"### Tokenizer example\n\n'''python\n>>> from transformers import AutoTokenizer\n>>> tokenizer = AutoTokenizer.from_pretrained(\"Jinhwan/krelectra-base-mecab\")\n>>> tokenizer.tokenize(\"[CLS] 한국어 ELECTRA를 공유합니다. [SEP]\")\n['[CLS]', '한국어', 'EL', '##ECT', '##RA', '##를', '공유', '##합', '##니다', '.', '[SEP]']\n>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'EL', '##ECT', '##RA', '##를', '공유', '##합', '##니다', '.', '[SEP]'])\n[2, 7214, 24023, 24663, 26580, 3195, 7086, 3746, 5500, 17, 3]"
] | [
"TAGS\n#transformers #pytorch #electra #pretraining #korean #ko #license-apache-2.0 #endpoints_compatible #region-us \n",
"# KrELECTRA-base-mecab\nKorean-based Pre-trained ELECTRA Language Model using Mecab (Morphological Analyzer)",
"## Usage",
"### Load model and tokenizer",
"### Tokenizer example\n\n'''python\n>>> from transformers import AutoTokenizer\n>>> tokenizer = AutoTokenizer.from_pretrained(\"Jinhwan/krelectra-base-mecab\")\n>>> tokenizer.tokenize(\"[CLS] 한국어 ELECTRA를 공유합니다. [SEP]\")\n['[CLS]', '한국어', 'EL', '##ECT', '##RA', '##를', '공유', '##합', '##니다', '.', '[SEP]']\n>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'EL', '##ECT', '##RA', '##를', '공유', '##합', '##니다', '.', '[SEP]'])\n[2, 7214, 24023, 24663, 26580, 3195, 7086, 3746, 5500, 17, 3]"
] | [
36,
30,
3,
8,
291
] | [
"TAGS\n#transformers #pytorch #electra #pretraining #korean #ko #license-apache-2.0 #endpoints_compatible #region-us \n# KrELECTRA-base-mecab\nKorean-based Pre-trained ELECTRA Language Model using Mecab (Morphological Analyzer)## Usage### Load model and tokenizer### Tokenizer example\n\n'''python\n>>> from transformers import AutoTokenizer\n>>> tokenizer = AutoTokenizer.from_pretrained(\"Jinhwan/krelectra-base-mecab\")\n>>> tokenizer.tokenize(\"[CLS] 한국어 ELECTRA를 공유합니다. [SEP]\")\n['[CLS]', '한국어', 'EL', '##ECT', '##RA', '##를', '공유', '##합', '##니다', '.', '[SEP]']\n>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'EL', '##ECT', '##RA', '##를', '공유', '##합', '##니다', '.', '[SEP]'])\n[2, 7214, 24023, 24663, 26580, 3195, 7086, 3746, 5500, 17, 3]"
] |
null | null | for test | {"license": "afl-3.0"} | Jira/first_test | null | [
"license:afl-3.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#license-afl-3.0 #region-us
| for test | [] | [
"TAGS\n#license-afl-3.0 #region-us \n"
] | [
13
] | [
"TAGS\n#license-afl-3.0 #region-us \n"
] |
zero-shot-classification | transformers |
# XLM-roBERTa-large-it-mnli
## Version 0.1
| | matched-it acc | mismatched-it acc |
| -------------------------------------------------------------------------------- |----------------|-------------------|
| XLM-roBERTa-large-it-mnli | 84.75 | 85.39 |
## Model Description
This model takes [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and fine-tunes it on a subset of NLI data taken from an automatically translated version of the MNLI corpus. It is intended to be used for zero-shot text classification, such as with the Hugging Face [ZeroShotClassificationPipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline).
## Intended Usage
This model is intended to be used for zero-shot text classification of Italian texts.
Since the base model was pre-trained on 100 different languages, the
model has shown some effectiveness in languages other than Italian as
well. See the full list of pre-trained languages in appendix A of the
[XLM RoBERTa paper](https://arxiv.org/abs/1911.02116).
For English-only classification, it is recommended to use
[bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) or
[a distilled bart MNLI model](https://huggingface.co/models?filter=pipeline_tag%3Azero-shot-classification&search=valhalla).
#### With the zero-shot classification pipeline
The model can be loaded with the `zero-shot-classification` pipeline like so:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="Jiva/xlm-roberta-large-it-mnli", device=0, use_fast=True, multi_label=True)
```
You can then classify in any of the above languages. You can even pass the labels in one language and the sequence to
classify in another:
```python
# we will classify the following Wikipedia entry about Sardinia
sequence_to_classify = "La Sardegna è una regione italiana a statuto speciale di 1 592 730 abitanti con capoluogo Cagliari, la cui denominazione bilingue utilizzata nella comunicazione ufficiale è Regione Autonoma della Sardegna / Regione Autònoma de Sardigna."
# we can specify candidate labels in Italian:
candidate_labels = ["geografia", "politica", "macchine", "cibo", "moda"]
classifier(sequence_to_classify, candidate_labels)
# {'labels': ['geografia', 'moda', 'politica', 'macchine', 'cibo'],
# 'scores': [0.38871392607688904, 0.22633370757102966, 0.19398456811904907, 0.13735772669315338, 0.13708525896072388]}
```
The default hypothesis template is the English one, `This text is {}`. With this model, better results are achieved when providing a translated template:
```python
sequence_to_classify = "La Sardegna è una regione italiana a statuto speciale di 1 592 730 abitanti con capoluogo Cagliari, la cui denominazione bilingue utilizzata nella comunicazione ufficiale è Regione Autonoma della Sardegna / Regione Autònoma de Sardigna."
candidate_labels = ["geografia", "politica", "macchine", "cibo", "moda"]
hypothesis_template = "si parla di {}"
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template)
# 'scores': [0.6068345904350281, 0.34715887904167175, 0.32433947920799255, 0.3068877160549164, 0.18744681775569916]}
```
#### With manual PyTorch
```python
# pose sequence as a NLI premise and label as a hypothesis
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = 'cuda' if torch.cuda.is_available() else 'cpu'
nli_model = AutoModelForSequenceClassification.from_pretrained('Jiva/xlm-roberta-large-it-mnli').to(device)
tokenizer = AutoTokenizer.from_pretrained('Jiva/xlm-roberta-large-it-mnli')
premise = sequence_to_classify  # the text defined in the pipeline example above
label = 'geografia'
hypothesis = f'si parla di {label}.'
# run through model pre-trained on MNLI
x = tokenizer.encode(premise, hypothesis, return_tensors='pt',
                     truncation='only_first')
logits = nli_model(x.to(device))[0]
# we throw away "neutral" (dim 1) and take the probability of
# "entailment" (2) as the probability of the label being true
entail_contradiction_logits = logits[:,[0,2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:,1]
```
## Training
## Version 0.1
The model has now been retrained on the full training set. Around 1000 sentence pairs have been removed from the set because their translation was botched by the translation model.
| metric | value |
|----------------- |------- |
| learning_rate | 4e-6 |
| optimizer | AdamW |
| batch_size | 80 |
| mcc | 0.77 |
| train_loss | 0.34 |
| eval_loss | 0.40 |
| stopped_at_step | 9754 |
## Version 0.0
This model was pre-trained on a set of 100 languages, as described in
[the original paper](https://arxiv.org/abs/1911.02116). It was then fine-tuned on the task of NLI on an Italian translation of the MNLI dataset (85% of the train set only so far). The model used for translating the texts is Helsinki-NLP/opus-mt-en-it, with a max output sequence lenght of 120. The model has been trained for 1 epoch with learning rate 4e-6 and batch size 80, currently it scores 82 acc. on the remaining 15% of the training. | {"language": "it", "license": "mit", "tags": ["text-classification", "pytorch", "tensorflow"], "datasets": ["multi_nli", "glue"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "La seconda guerra mondiale vide contrapporsi, tra il 1939 e il 1945, le cosiddette potenze dell'Asse e gli Alleati che, come gi\u00e0 accaduto ai belligeranti della prima guerra mondiale, si combatterono su gran parte del pianeta; il conflitto ebbe inizio il 1\u00ba settembre 1939 con l'attacco della Germania nazista alla Polonia e termin\u00f2, nel teatro europeo, l'8 maggio 1945 con la resa tedesca e, in quello asiatico, il successivo 2 settembre con la resa dell'Impero giapponese dopo i bombardamenti atomici di Hiroshima e Nagasaki.", "candidate_labels": "guerra, storia, moda, cibo", "multi_class": true}], "model-index": [{"name": "Jiva/xlm-roberta-large-it-mnli", "results": [{"task": {"type": "natural-language-inference", "name": "Natural Language Inference"}, "dataset": {"name": "glue", "type": "glue", "config": "mnli", "split": "validation_matched"}, "metrics": [{"type": "accuracy", "value": 0.8819154355578197, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjY3MTgxNjg2ZGZmYjRjNmUyYWMwYzA3M2I3M2U0ZTYxZTFlNWY0Y2Y3MjZhYmVmM2U0OTZlYmJiMzU0MWRiMiIsInZlcnNpb24iOjF9.jgND_l7mc3EtHPiAPbAas7YaNnNZ5FSZNmIDOHSEpqV87lGL2XL4seol_MspagWmoQAN_RGdSM9nsIQH364EAw"}, {"type": "precision", "value": 0.8814638070461666, "name": "Precision Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGY0MjQ0ZDkyMzA3NmU2YmYzMGUyNTJmNWUxMTI4MTI5YzhiNjA2MzZiZDBmMTc4ODdhMzcxNTMyM2Y0MWIwOCIsInZlcnNpb24iOjF9.BCDxzHFaXZWISV2qkXimdnIxGT3qVos-tcBv3Yp9VntL2ot4e-Nifman-Yb4XwmHccTxBnf3TY0DxEE55vF9BQ"}, {"type": "precision", "value": 0.8819154355578197, "name": "Precision Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTlkZWIzNTBhNmFkNzkwNzg3ODcxNmU3YjgwODBmMmE5Njc3M2RmMDk0ZGFjZWYwMDBmNzVjOTQ3NGYyZjI3ZSIsInZlcnNpb24iOjF9.ejVcvVSUBWSMbvpxlkVi73qzkwNBgD5C1GBTandyWbk3bOas7fJ26x0duI6sNkgz-Y3Q_3pI-LJSCZgtPhP0Bw"}, {"type": "precision", "value": 0.881571663280083, "name": "Precision Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDFkMWI2MTIwNjRmYjgxYjZiNWJmZWZmNzAxNDcwODdjYzg2MTAwM2I5YWRjYWQ0MzA5MTk5MTFlZDI5NGQ4MiIsInZlcnNpb24iOjF9.GrHhqY6L8AJEy0XaNzR2QI2nnwJUen8Ay5sKVh0gBN3jAv-DWwNrjVZgeclGgH4pOdRxxlNCOkZyPnEEon4eAA"}, {"type": "recall", "value": 0.8802419956104793, "name": "Recall Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjFhNjA2M2IxZGQwYjE3YzIzZGRkMTM1MDg5OTBiNTY3YjE1YjE0ZDlkNmI1ZmY5ZmM5OTZkOTk2ODI3Mzc3YiIsInZlcnNpb24iOjF9.yWoQSRCGGu6mNhjak6fPM-w01kAlDK8lDVdlKserf19gEeiB4vyPfklrp4HdlRFadfUB7pJ2iloTCkDj_jPYBA"}, {"type": "recall", "value": 0.8819154355578197, "name": "Recall Micro", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjQ1N2FhNmRiMWY5YmIwODgzNjI2YjY2NzgwNmQ2ZDRmN2UzNTg3MWQ0NDhmMjMzNjc2NGExMjliNWYxMDRjZSIsInZlcnNpb24iOjF9.XGiAwKlPkFwimVDK2CJ37oi8mz2KkJMxAanTJNFcW_Lwa-9T9--yZNtS3t1pfbUP2NeXxCzW_8DlxnM7RcG2DA"}, {"type": "recall", "value": 0.8819154355578197, "name": "Recall Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDU1OWFjN2ZmYjVlNWJjZTVmZDQ0MmVjZmFkMmU2OTkzZTcxZDkyZTlmN2E0NjFkOTE4YzU1ZjVjYWMxYjViYSIsInZlcnNpb24iOjF9.HpRWd_-NXIgZemTCIcpK2lpe4bt2fro_NgWX2wuvN4uWVdKsYKr9v5W8EOEv4xWzdbgtlllCG9UCc3-7YqBAAg"}, {"type": "f1", "value": 0.8802937937959167, "name": "F1 Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2U1OGNmZDMxZTUwNDgxZjIzYWM2ZGQzZTg1NmNjMjdjNTkxNTk0MGI2ZDlkYjVmODFiZTllZmE0NzZlZWVlOCIsInZlcnNpb24iOjF9.7NupnTf-kIv0pIoof-2XHp7ESavQeTDDRGs3bTF3F0UJsorY8WO7I_qyoGiuPmLWtwFsNJjybQdMahM_oss7Ag"}, {"type": "f1", "value": 0.8819154355578197, "name": "F1 Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODA2MGU2MzM5OWRjMTk4OGYxNTIxMjUyNWI0YjU5ZWRlMDZhMWRjMjk1MmQzZDg0YTYzYzY4M2U3OWFhNzEwNiIsInZlcnNpb24iOjF9.dIYUojg4cbbQCP6rlp2tbX72tMR5ROtUZYFDJBgHD8_KfEAr9nNoLeP2cvFCYcFe8MyQh7LADTK5l0PTt3B0AQ"}, {"type": "f1", "value": 0.8811955957302677, "name": "F1 Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2I2ZDQ4NWY5NmNmZjNjOWRjNGUyYzcyZWNjNzA0MGJlZmRkYWIwNjVmYmFlNjRmMjAwMWIwMTJjNDY1MjYxNyIsInZlcnNpb24iOjF9.LII2Vu8rWWbjWU55Yenf4ZsSpReiPsoBmHH1XwgVu7HgTtL-TnRaCCxSTJ0i0jnK8sa2kKqXw1RndE1HL1GbBQ"}, {"type": "loss", "value": 0.3171548545360565, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGYxNDA4YzBjMGU5MDBjNGQwOThlMzZkNWFjNDg4MzdiNWFiNGM2ZmQyOTZmNTBkMTE1OGI1NzhmMGM3ZWJjYSIsInZlcnNpb24iOjF9._yP8hC7siIQkSG8-R9RLlIYqqyh8sobk-jN1-QELU0iv9VS54df_7nNPy8hGUVx-TAntaIeFyQ8DLVcM_vVDDw"}]}]}]} | Jiva/xlm-roberta-large-it-mnli | null | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"tensorflow",
"zero-shot-classification",
"it",
"dataset:multi_nli",
"dataset:glue",
"arxiv:1911.02116",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1911.02116"
] | [
"it"
] | TAGS
#transformers #pytorch #safetensors #xlm-roberta #text-classification #tensorflow #zero-shot-classification #it #dataset-multi_nli #dataset-glue #arxiv-1911.02116 #license-mit #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
| XLM-roBERTa-large-it-mnli
=========================
Version 0.1
-----------
matched-it acc: XLM-roBERTa-large-it-mnli, mismatched-it acc: 84.75
Model Description
-----------------
This model takes xlm-roberta-large and fine-tunes it on a subset of NLI data taken from an automatically translated version of the MNLI corpus. It is intended to be used for zero-shot text classification, such as with the Hugging Face ZeroShotClassificationPipeline.
Intended Usage
--------------
This model is intended to be used for zero-shot text classification of Italian texts.
Since the base model was pre-trained on 100 different languages, the
model has shown some effectiveness in languages beyond those listed above as
well. See the full list of pre-trained languages in appendix A of the
XLM-RoBERTa paper
For English-only classification, it is recommended to use
bart-large-mnli or
a distilled bart MNLI model.
#### With the zero-shot classification pipeline
The model can be loaded with the 'zero-shot-classification' pipeline like so:
You can then classify in any of the above languages. You can even pass the labels in one language and the sequence to
classify in another:
The default hypothesis template is the English 'This text is {}'. With this model, better results are achieved when providing a translated template:
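A minimal sketch of the calls described above, assuming only the standard `transformers` zero-shot API and this repository id (the Italian example sentence and candidate labels are illustrative, not from the original card):

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="Jiva/xlm-roberta-large-it-mnli")

sequence_to_classify = "La Juventus ha vinto la partita all'ultimo minuto."
candidate_labels = ["sport", "politica", "economia"]

# Default (English) hypothesis template
print(classifier(sequence_to_classify, candidate_labels))

# Translated hypothesis template, as recommended above
print(classifier(sequence_to_classify, candidate_labels,
                 hypothesis_template="Questo testo parla di {}."))
```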
#### With manual PyTorch
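The original snippet is not reproduced in this rendering; the sketch below shows the usual NLI-based scoring recipe in plain PyTorch. The premise/hypothesis pair is made up, and the MNLI label names are an assumption; check `model.config.label2id` for this checkpoint's actual mapping.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Jiva/xlm-roberta-large-it-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "La Juventus ha vinto la partita all'ultimo minuto."
hypothesis = "Questo testo parla di sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits[0]

# Assumed MNLI-style label names; verify against model.config.label2id.
label2id = {name.lower(): idx for name, idx in model.config.label2id.items()}

# Drop the neutral logit and renormalize over contradiction/entailment,
# as in the usual zero-shot recipe.
scores = torch.softmax(logits[[label2id["contradiction"], label2id["entailment"]]], dim=-1)
print(f"P(hypothesis is true) = {scores[1].item():.2%}")
```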
Training
--------
Version 0.1
-----------
The model has now been retrained on the full training set. Around 1000 sentence pairs were removed from the set because their translation was botched by the translation model.
Version 0.0
-----------
This model was pre-trained on a set of 100 languages, as described in
the original paper. It was then fine-tuned on the task of NLI on an Italian translation of the MNLI dataset (85% of the train set only so far). The model used for translating the texts is Helsinki-NLP/opus-mt-en-it, with a max output sequence length of 120. The model has been trained for 1 epoch with learning rate 4e-6 and batch size 80; it currently scores 82% accuracy on the remaining 15% of the training set.
| [
"#### With the zero-shot classification pipeline\n\n\nThe model can be loaded with the 'zero-shot-classification' pipeline like so:\n\n\nYou can then classify in any of the above languages. You can even pass the labels in one language and the sequence to\nclassify in another:\n\n\nThe default hypothesis template is the English, 'This text is {}'. With this model better results are achieving when providing a translated template:",
"#### With manual PyTorch\n\n\nTraining\n--------\n\n\nVersion 0.1\n-----------\n\n\nThe model has been now retrained on the full training set. Around 1000 sentences pairs have been removed from the set because their translation was botched by the translation model.\n\n\n\nVersion 0.0\n-----------\n\n\nThis model was pre-trained on set of 100 languages, as described in\nthe original paper. It was then fine-tuned on the task of NLI on an Italian translation of the MNLI dataset (85% of the train set only so far). The model used for translating the texts is Helsinki-NLP/opus-mt-en-it, with a max output sequence lenght of 120. The model has been trained for 1 epoch with learning rate 4e-6 and batch size 80, currently it scores 82 acc. on the remaining 15% of the training."
] | [
"TAGS\n#transformers #pytorch #safetensors #xlm-roberta #text-classification #tensorflow #zero-shot-classification #it #dataset-multi_nli #dataset-glue #arxiv-1911.02116 #license-mit #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"#### With the zero-shot classification pipeline\n\n\nThe model can be loaded with the 'zero-shot-classification' pipeline like so:\n\n\nYou can then classify in any of the above languages. You can even pass the labels in one language and the sequence to\nclassify in another:\n\n\nThe default hypothesis template is the English, 'This text is {}'. With this model better results are achieving when providing a translated template:",
"#### With manual PyTorch\n\n\nTraining\n--------\n\n\nVersion 0.1\n-----------\n\n\nThe model has been now retrained on the full training set. Around 1000 sentences pairs have been removed from the set because their translation was botched by the translation model.\n\n\n\nVersion 0.0\n-----------\n\n\nThis model was pre-trained on set of 100 languages, as described in\nthe original paper. It was then fine-tuned on the task of NLI on an Italian translation of the MNLI dataset (85% of the train set only so far). The model used for translating the texts is Helsinki-NLP/opus-mt-en-it, with a max output sequence lenght of 120. The model has been trained for 1 epoch with learning rate 4e-6 and batch size 80, currently it scores 82 acc. on the remaining 15% of the training."
] | [
81,
86,
203
] | [
"TAGS\n#transformers #pytorch #safetensors #xlm-roberta #text-classification #tensorflow #zero-shot-classification #it #dataset-multi_nli #dataset-glue #arxiv-1911.02116 #license-mit #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n#### With the zero-shot classification pipeline\n\n\nThe model can be loaded with the 'zero-shot-classification' pipeline like so:\n\n\nYou can then classify in any of the above languages. You can even pass the labels in one language and the sequence to\nclassify in another:\n\n\nThe default hypothesis template is the English, 'This text is {}'. With this model better results are achieving when providing a translated template:#### With manual PyTorch\n\n\nTraining\n--------\n\n\nVersion 0.1\n-----------\n\n\nThe model has been now retrained on the full training set. Around 1000 sentences pairs have been removed from the set because their translation was botched by the translation model.\n\n\n\nVersion 0.0\n-----------\n\n\nThis model was pre-trained on set of 100 languages, as described in\nthe original paper. It was then fine-tuned on the task of NLI on an Italian translation of the MNLI dataset (85% of the train set only so far). The model used for translating the texts is Helsinki-NLP/opus-mt-en-it, with a max output sequence lenght of 120. The model has been trained for 1 epoch with learning rate 4e-6 and batch size 80, currently it scores 82 acc. on the remaining 15% of the training."
] |
text-generation | transformers |
# My Awesome Model | {"tags": ["conversational"]} | Jllama/dialoGPT-small-Joshua-test | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Model | [
"# My Awesome Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model"
] | [
39,
4
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# My Awesome Model"
] |
text-classification | transformers |
# roberta-base-bne-finetuned-catalonia-independence-detector
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the catalonia_independence dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9415
- Accuracy: 0.7881
<details>
## Model description
The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia.
Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target - independence of Catalonia.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 378 | 0.5534 | 0.7558 |
| 0.6089 | 2.0 | 756 | 0.5315 | 0.7643 |
| 0.2678 | 3.0 | 1134 | 0.7336 | 0.7816 |
| 0.0605 | 4.0 | 1512 | 0.8809 | 0.7866 |
| 0.0605 | 5.0 | 1890 | 0.9415 | 0.7881 |
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector"
independence_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
independence_analysis(
"Junqueras, sobre la decisión judicial sobre Puigdemont: La justicia que falta en el Estado llega y llegará de Europa"
)
# Output:
[{'label': 'FAVOR', 'score': 0.9936726093292236}]
independence_analysis(
"El desafío independentista queda adormecido, y eso que el Gobierno ha sido muy claro en que su propuesta para Cataluña es una agenda de reencuentro, centrada en inversiones e infraestructuras")
# Output:
[{'label': 'AGAINST', 'score': 0.7508948445320129}]
independence_analysis(
"Desconvocada la manifestación del domingo en Barcelona en apoyo a Puigdemont"
)
# Output:
[{'label': 'NEUTRAL', 'score': 0.9966907501220703}]
```
[](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Catalonia_independence_Detector_(SPANISH).ipynb#scrollTo=uNMOXJz38W6U)
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
## Citation
Thx to HF.co & [@lewtun](https://github.com/lewtun) for Dataset ;)
> Special thx to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/) | {"language": "es", "license": "apache-2.0", "tags": ["spanish"], "datasets": ["catalonia_independence"], "metrics": ["accuracy"], "widget": [{"text": "Junqueras, sobre la decisi\u00f3n judicial sobre Puigdemont: La justicia que falta en el Estado llega y llegar\u00e1 de Europa"}, {"text": "Desconvocada la manifestaci\u00f3n del domingo en Barcelona en apoyo a Puigdemont"}], "model-index": [{"name": "roberta-base-bne-finetuned-mnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "catalonia_independence", "type": "catalonia_independence", "args": "spanish"}, "metrics": [{"type": "accuracy", "value": 0.7880893300248138, "name": "Accuracy"}]}, {"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "catalonia_independence", "type": "catalonia_independence", "config": "catalan", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.4592039800995025, "name": "Accuracy", "verified": true}, {"type": "precision", "value": 0.6104489964825159, "name": "Precision Macro", "verified": true}, {"type": "precision", "value": 0.4592039800995025, "name": "Precision Micro", "verified": true}, {"type": "precision", "value": 0.6167123723406555, "name": "Precision Weighted", "verified": true}, {"type": "recall", "value": 0.4146479268294389, "name": "Recall Macro", "verified": true}, {"type": "recall", "value": 0.4592039800995025, "name": "Recall Micro", "verified": true}, {"type": "recall", "value": 0.4592039800995025, "name": "Recall Weighted", "verified": true}, {"type": "f1", "value": 0.33416407167650636, "name": "F1 Macro", "verified": true}, {"type": "f1", "value": 0.4592039800995025, "name": "F1 Micro", "verified": true}, {"type": "f1", "value": 0.34549318538357193, "name": "F1 Weighted", "verified": true}, {"type": "loss", "value": 3.393402099609375, "name": "loss", "verified": true}]}]}]} | JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"spanish",
"es",
"dataset:catalonia_independence",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #tensorboard #roberta #text-classification #spanish #es #dataset-catalonia_independence #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
| roberta-base-bne-finetuned-catalonia-independence-detector
==========================================================
This model is a fine-tuned version of BSC-TeMU/roberta-base-bne on the catalonia\_independence dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9415
* Accuracy: 0.7881
Model description
-----------------
The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia.
Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target - independence of Catalonia.
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Model in action
Fast usage with pipelines:

>
> Special thx to Manuel Romero/@mrm8488 as my mentor & R.C.
>
>
>
>
> Created by Jonatan Luna | LinkedIn
>
>
>
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Model in action\n\n\nFast usage with pipelines:\n\n\n\n\n\n\n> \n> Special thx to Manuel Romero/@mrm8488 as my mentor & R.C.\n> \n> \n> \n\n\n\n> \n> Created by Jonatan Luna | LinkedIn\n> \n> \n>"
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #spanish #es #dataset-catalonia_independence #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Model in action\n\n\nFast usage with pipelines:\n\n\n\n\n\n\n> \n> Special thx to Manuel Romero/@mrm8488 as my mentor & R.C.\n> \n> \n> \n\n\n\n> \n> Created by Jonatan Luna | LinkedIn\n> \n> \n>"
] | [
58,
101,
5,
22,
99
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #spanish #es #dataset-catalonia_independence #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5### Training results### Model in action\n\n\nFast usage with pipelines:\n\n\n\n\n\n\n> \n> Special thx to Manuel Romero/@mrm8488 as my mentor & R.C.\n> \n> \n> \n\n\n\n> \n> Created by Jonatan Luna | LinkedIn\n> \n> \n>"
] |
text-classification | transformers |
# roberta-base-bne-finetuned-ciberbullying-spanish
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on a dataset generated by scraping social networks (Twitter, Youtube ...) to detect cyberbullying in Spanish.
It achieves the following results on the evaluation set:
- Loss: 0.1657
- Accuracy: 0.9607
## Training and evaluation data
I use the concatenation of multiple datasets generated by scraping social networks (Twitter, Youtube, Discord...) to fine-tune this model. The total dataset contains over 360k sentences.
## Training procedure
<details>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.1512 | 1.0 | 22227 | 0.9501 | 0.1418 |
| 0.1253 | 2.0 | 44454 | 0.9567 | 0.1499 |
| 0.0973 | 3.0 | 66681 | 0.9594 | 0.1397 |
| 0.0658 | 4.0 | 88908 | 0.9607 | 0.1657 |
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-bne-finetuned-ciberbullying-spanish"
bullying_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
bullying_analysis(
"Desde que te vi me enamoré de ti."
)
# Output:
[{'label': 'Not_bullying', 'score': 0.9995710253715515}]
bullying_analysis(
"Eres tan fea que cuando eras pequeña te echaban de comer por debajo de la puerta."
)
# Output:
[{'label': 'Bullying', 'score': 0.9918262958526611}]
```
[](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Cyberbullying_detection_(SPANISH).ipynb)
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
> Special thx to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/) | {"language": "es", "tags": ["spanish"], "metrics": ["accuracy"], "widget": [{"text": "Eres mas peque\u00f1o que un pitufo!"}, {"text": "Eres muy feo!"}, {"text": "Odio tu forma de hablar!"}, {"text": "Eres tan fea que cuando eras peque\u00f1a te echaban de comer por debajo de la puerta."}]} | JonatanGk/roberta-base-bne-finetuned-cyberbullying-spanish | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"spanish",
"es",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #tensorboard #safetensors #roberta #text-classification #spanish #es #autotrain_compatible #endpoints_compatible #has_space #region-us
| roberta-base-bne-finetuned-ciberbullying-spanish
================================================
This model is a fine-tuned version of BSC-TeMU/roberta-base-bne on a dataset generated by scraping social networks (Twitter, Youtube ...) to detect cyberbullying in Spanish.
It achieves the following results on the evaluation set:
* Loss: 0.1657
* Accuracy: 0.9607
Training and evaluation data
----------------------------
I use the concatenation of multiple datasets generated by scraping social networks (Twitter, Youtube, Discord...) to fine-tune this model. The total dataset contains over 360k sentences.
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
### Training results
### Model in action
Fast usage with pipelines:
 and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Model in action\n\n\nFast usage with pipelines:\n\n\n and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Model in action\n\n\nFast usage with pipelines:\n\n\n and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4### Training results### Model in action\n\n\nFast usage with pipelines:\n\n\n on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2869
- Accuracy: 0.9012
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3222 | 1.0 | 1255 | 0.2869 | 0.9012 |
| 0.2418 | 2.0 | 2510 | 0.3125 | 0.8987 |
| 0.1726 | 3.0 | 3765 | 0.4120 | 0.8943 |
| 0.0685 | 4.0 | 5020 | 0.5239 | 0.8919 |
| 0.0245 | 5.0 | 6275 | 0.5910 | 0.8947 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
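
No usage snippet is included in this card; below is a minimal sketch in the style of the author's other cards, assuming the checkpoint is published as `JonatanGk/roberta-base-bne-finetuned-hate-speech-offensive-spanish` and that the standard `text-classification` pipeline applies (the label names are not documented here, so inspect the returned `label` field; the example sentence is illustrative only):

```python
from transformers import pipeline

model_path = "JonatanGk/roberta-base-bne-finetuned-hate-speech-offensive-spanish"
classifier = pipeline("text-classification", model=model_path, tokenizer=model_path)

# The label set comes from the model config and is not documented in this card.
print(classifier("Odio tu forma de hablar!"))
```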
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "roberta-base-bne-finetuned-mnli", "results": []}]} | JonatanGk/roberta-base-bne-finetuned-hate-speech-offensive-spanish | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| roberta-base-bne-finetuned-mnli
===============================
This model is a fine-tuned version of BSC-TeMU/roberta-base-bne on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2869
* Accuracy: 0.9012
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.12.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
49,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5### Training results### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-sqac
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the sqac dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9924 | 1.0 | 1196 | 0.8670 |
| 0.474 | 2.0 | 2392 | 0.8923 |
| 0.1637 | 3.0 | 3588 | 1.2066 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
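
The card stops at the framework versions; a minimal usage sketch follows, assuming the checkpoint is published as `JonatanGk/roberta-base-bne-finetuned-sqac` and the standard `question-answering` pipeline (the question/context pair is made up for illustration):

```python
from transformers import pipeline

model_path = "JonatanGk/roberta-base-bne-finetuned-sqac"
qa_pipeline = pipeline("question-answering", model=model_path, tokenizer=model_path)

result = qa_pipeline(
    question="¿Dónde trabaja Manuel?",
    context="Manuel Romero vive en Madrid y trabaja como ingeniero de software.",
)
print(result["answer"], result["score"])
```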
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["sqac"], "model-index": [{"name": "roberta-base-bne-finetuned-sqac", "results": []}]} | JonatanGk/roberta-base-bne-finetuned-sqac | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:sqac",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #roberta #question-answering #generated_from_trainer #dataset-sqac #license-apache-2.0 #endpoints_compatible #region-us
| roberta-base-bne-finetuned-sqac
===============================
This model is a fine-tuned version of PlanTL-GOB-ES/roberta-base-bne on the sqac dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2066
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #question-answering #generated_from_trainer #dataset-sqac #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] | [
46,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #question-answering #generated_from_trainer #dataset-sqac #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
text-classification | transformers |
# roberta-base-ca-finetuned-catalonia-independence-detector
This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on the catalonia_independence dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6065
- Accuracy: 0.7612
<details>
## Training and evaluation data
The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia.
Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target - independence of Catalonia.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 377 | 0.6311 | 0.7453 |
| 0.7393 | 2.0 | 754 | 0.6065 | 0.7612 |
| 0.5019 | 3.0 | 1131 | 0.6340 | 0.7547 |
| 0.3837 | 4.0 | 1508 | 0.6777 | 0.7597 |
| 0.3837 | 5.0 | 1885 | 0.7232 | 0.7582 |
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector"
independence_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
independence_analysis(
"Assegura l'expert que en un 46% els catalans s'inclouen dins del que es denomina com el doble sentiment identitari. És a dir, se senten tant catalans com espanyols. 1 de cada cinc, en canvi, té un sentiment excloent, només se senten catalans, i un 4% sol espanyol."
)
# Output:
[{'label': 'AGAINST', 'score': 0.7457581758499146}]
independence_analysis(
"Llarena demana la detenció de Comín i Ponsatí aprofitant que són a Itàlia amb Puigdemont"
)
# Output:
[{'label': 'NEUTRAL', 'score': 0.7436802983283997}]
independence_analysis(
"Puigdemont, a l'estat espanyol: Quatre anys després, ens hem guanyat el dret a dir prou"
)
# Output:
[{'label': 'FAVOR', 'score': 0.9040119647979736}]
```
[](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Catalonia_independence_Detector_(CATALAN).ipynb#scrollTo=j29NHJtOyAVU)
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
## Citation
Thx to HF.co & [@lewtun](https://github.com/lewtun) for Dataset ;)
> Special thx to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/) | {"language": "ca", "license": "apache-2.0", "tags": ["catalan"], "datasets": ["catalonia_independence"], "metrics": ["accuracy"], "widget": [{"text": "Puigdemont, a l'estat espanyol: Quatre anys despr\u00e9s, ens hem guanyat el dret a dir prou"}, {"text": "Llarena demana la detenci\u00f3 de Com\u00edn i Ponsat\u00ed aprofitant que s\u00f3n a It\u00e0lia amb Puigdemont"}, {"text": "Assegura l'expert que en un 46% els catalans s'inclouen dins del que es denomina com el doble sentiment identitari. \u00c9s a dir, se senten tant catalans com espanyols. 1 de cada cinc, en canvi, t\u00e9 un sentiment excloent, nom\u00e9s se senten catalans, i un 4% sol espanyol."}], "model-index": [{"name": "roberta-base-ca-finetuned-mnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "catalonia_independence", "type": "catalonia_independence", "args": "catalan"}, "metrics": [{"type": "accuracy", "value": 0.7611940298507462, "name": "Accuracy"}]}, {"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "catalonia_independence", "type": "catalonia_independence", "config": "catalan", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.7208955223880597, "name": "Accuracy", "verified": true}, {"type": "precision", "value": 0.7532458247651523, "name": "Precision Macro", "verified": true}, {"type": "precision", "value": 0.7208955223880597, "name": "Precision Micro", "verified": true}, {"type": "precision", "value": 0.7367396361532118, "name": "Precision Weighted", "verified": true}, {"type": "recall", "value": 0.6880645531209203, "name": "Recall Macro", "verified": true}, {"type": "recall", "value": 0.7208955223880597, "name": "Recall Micro", "verified": true}, {"type": "recall", "value": 0.7208955223880597, "name": "Recall Weighted", "verified": true}, {"type": "f1", "value": 0.7013044744309381, "name": "F1 Macro", "verified": true}, {"type": "f1", "value": 0.7208955223880597, "name": "F1 Micro", "verified": true}, {"type": "f1", "value": 0.713640086434487, "name": "F1 Weighted", "verified": true}, {"type": "loss", "value": 0.6895929574966431, "name": "loss", "verified": true}]}]}]} | JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"catalan",
"ca",
"dataset:catalonia_independence",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ca"
] | TAGS
#transformers #pytorch #tensorboard #roberta #text-classification #catalan #ca #dataset-catalonia_independence #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
| roberta-base-ca-finetuned-catalonia-independence-detector
=========================================================
This model is a fine-tuned version of BSC-TeMU/roberta-base-ca on the catalonia\_independence dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6065
* Accuracy: 0.7612
Training and evaluation data
----------------------------
The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia.
Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target - independence of Catalonia.
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Model in action
Fast usage with pipelines:

>
> Special thx to Manuel Romero/@mrm8488 as my mentor & R.C.
>
>
>
>
> Created by Jonatan Luna | LinkedIn
>
>
>
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Model in action\n\n\nFast usage with pipelines:\n\n\n\n\n\n\n> \n> Special thx to Manuel Romero/@mrm8488 as my mentor & R.C.\n> \n> \n> \n\n\n\n> \n> Created by Jonatan Luna | LinkedIn\n> \n> \n>"
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #catalan #ca #dataset-catalonia_independence #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Model in action\n\n\nFast usage with pipelines:\n\n\n\n\n\n\n> \n> Special thx to Manuel Romero/@mrm8488 as my mentor & R.C.\n> \n> \n> \n\n\n\n> \n> Created by Jonatan Luna | LinkedIn\n> \n> \n>"
] | [
58,
101,
5,
22,
99
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #catalan #ca #dataset-catalonia_independence #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5### Training results### Model in action\n\n\nFast usage with pipelines:\n\n\n\n\n\n\n> \n> Special thx to Manuel Romero/@mrm8488 as my mentor & R.C.\n> \n> \n> \n\n\n\n> \n> Created by Jonatan Luna | LinkedIn\n> \n> \n>"
] |
text-classification | transformers | # roberta-base-ca-finetuned-cyberbullying-catalan
This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on a dataset generated by scraping social networks (Twitter, Youtube ...) to detect cyberbullying in Catalan.
It achieves the following results on the evaluation set:
- Loss: 0.1508
- Accuracy: 0.9665
## Training and evaluation data
I use the concatenation of multiple datasets generated by scraping social networks (Twitter, Youtube, Discord...) to fine-tune this model. The total dataset contains over 410k sentences. It was trained with a similar method as [roberta-base-bne-finetuned-cyberbullying-spanish](https://huggingface.co/JonatanGk/roberta-base-bne-finetuned-cyberbullying-spanish)
## Training procedure
<details>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-ca-finetuned-ciberbullying-catalan"
bullying_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
bullying_analysis(
"Des que et vaig veure m'en vaig enamorar de tu."
)
# Output:
[{'label': 'Not_bullying', 'score': 0.9996786117553711}]
bullying_analysis(
"Ets tan lletja que et donaven de menjar per sota la porta."
)
# Output:
[{'label': 'Bullying', 'score': 0.9927878975868225}]
```
[](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Cyberbullying_detection_(CATALAN).ipynb)
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
## Citation
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
> Special thx to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/)
| {"language": "ca", "tags": ["catalan"], "metrics": ["accuracy"], "widget": [{"text": "Ets m\u00e9s petita que un barrufet!!"}, {"text": "Ets tan lletja que et donaven de menjar per sota la porta."}]} | JonatanGk/roberta-base-ca-finetuned-cyberbullying-catalan | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"catalan",
"ca",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ca"
] | TAGS
#transformers #pytorch #roberta #text-classification #catalan #ca #autotrain_compatible #endpoints_compatible #has_space #region-us
| # roberta-base-ca-finetuned-cyberbullying-catalan
This model is a fine-tuned version of BSC-TeMU/roberta-base-ca on a dataset generated by scraping social networks (Twitter, Youtube ...) to detect cyberbullying in Catalan.
It achieves the following results on the evaluation set:
- Loss: 0.1508
- Accuracy: 0.9665
## Training and evaluation data
I use the concatenation of multiple datasets generated by scraping social networks (Twitter, Youtube, Discord...) to fine-tune this model. The total dataset contains over 410k sentences. It was trained with a similar method as roberta-base-bne-finetuned-cyberbullying-spanish
## Training procedure
<details>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
</details>
### Model in action
Fast usage with pipelines:
 to detect cyberbullying on Catalan.\n\nIt achieves the following results on the evaluation set:\n- Loss: 0.1508\n- Accuracy: 0.9665",
"## Training and evaluation data\n\nI use the concatenation from multiple datasets generated scrapping social networks (Twitter,Youtube,Discord...) to fine-tune this model. The total number of sentence pairs is above 410k sentences. Trained similar method at roberta-base-bne-finetuned-cyberbullying-spanish",
"## Training procedure\n\n<details>",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4\n\n</details>",
"### Model in action \n\nFast usage with pipelines:\n\n\n\n to detect cyberbullying on Catalan.\n\nIt achieves the following results on the evaluation set:\n- Loss: 0.1508\n- Accuracy: 0.9665",
"## Training and evaluation data\n\nI use the concatenation from multiple datasets generated scrapping social networks (Twitter,Youtube,Discord...) to fine-tune this model. The total number of sentence pairs is above 410k sentences. Trained similar method at roberta-base-bne-finetuned-cyberbullying-spanish",
"## Training procedure\n\n<details>",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4\n\n</details>",
"### Model in action \n\nFast usage with pipelines:\n\n\n\n to detect cyberbullying on Catalan.\n\nIt achieves the following results on the evaluation set:\n- Loss: 0.1508\n- Accuracy: 0.9665## Training and evaluation data\n\nI use the concatenation from multiple datasets generated scrapping social networks (Twitter,Youtube,Discord...) to fine-tune this model. The total number of sentence pairs is above 410k sentences. Trained similar method at roberta-base-bne-finetuned-cyberbullying-spanish## Training procedure\n\n<details>### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 4\n\n</details>### Model in action \n\nFast usage with pipelines:\n\n\n\n on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4137
- Accuracy: 0.8778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3699 | 1.0 | 1255 | 0.3712 | 0.8669 |
| 0.3082 | 2.0 | 2510 | 0.3401 | 0.8766 |
| 0.2375 | 3.0 | 3765 | 0.4137 | 0.8778 |
| 0.1889 | 4.0 | 5020 | 0.4671 | 0.8733 |
| 0.1486 | 5.0 | 6275 | 0.5205 | 0.8749 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
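
No usage snippet is included in this card; a minimal sketch with the plain model classes follows, assuming the checkpoint is published as `JonatanGk/roberta-base-ca-finetuned-hate-speech-offensive-catalan` (the Catalan example sentence is reused from the author's cyberbullying card, and the label names come from the model config):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_path = "JonatanGk/roberta-base-ca-finetuned-hate-speech-offensive-catalan"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

inputs = tokenizer("Ets tan lletja que et donaven de menjar per sota la porta.",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# Print one probability per class, using the label names stored in the config.
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```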
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "roberta-base-ca-finetuned-mnli", "results": []}]} | JonatanGk/roberta-base-ca-finetuned-hate-speech-offensive-catalan | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| roberta-base-ca-finetuned-mnli
==============================
This model is a fine-tuned version of BSC-TeMU/roberta-base-ca on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4137
* Accuracy: 0.8778
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.12.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
45,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5### Training results### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-ca-finetuned-mnli
This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on the tecla dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9354
- Accuracy: 0.7362
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8465 | 1.0 | 6888 | 0.8222 | 0.6990 |
| 0.6966 | 2.0 | 13776 | 0.7872 | 0.7157 |
| 0.5643 | 3.0 | 20664 | 0.8060 | 0.7268 |
| 0.4435 | 4.0 | 27552 | 0.8470 | 0.7333 |
| 0.3206 | 5.0 | 34440 | 0.9354 | 0.7362 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
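
No usage snippet is included in this card; a minimal sketch follows, assuming the checkpoint is published as `JonatanGk/roberta-base-ca-finetuned-tecla` and that the TeCla topic labels are stored in the model config (the example headline is reused from the author's other Catalan cards):

```python
from transformers import pipeline

model_path = "JonatanGk/roberta-base-ca-finetuned-tecla"
classifier = pipeline("text-classification", model=model_path, tokenizer=model_path)

# Topic labels the classification head was trained with.
print(classifier.model.config.id2label)

print(classifier("Llarena demana la detenció de Comín i Ponsatí "
                 "aprofitant que són a Itàlia amb Puigdemont"))
```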
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tecla"], "metrics": ["accuracy"], "model-index": [{"name": "roberta-base-ca-finetuned-mnli", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tecla", "type": "tecla", "args": "tecla"}, "metrics": [{"type": "accuracy", "value": 0.7361816335412737, "name": "Accuracy"}]}]}]} | JonatanGk/roberta-base-ca-finetuned-tecla | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:tecla",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-tecla #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| roberta-base-ca-finetuned-mnli
==============================
This model is a fine-tuned version of BSC-TeMU/roberta-base-ca on the tecla dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9354
* Accuracy: 0.7362
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.12.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-tecla #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] | [
56,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #dataset-tecla #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5### Training results### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
null | null | This is a dummy model. | {} | JonathanSum/new-dummy-model | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| This is a dummy model. | [] | [
"TAGS\n#region-us \n"
] | [
5
] | [
"TAGS\n#region-us \n"
] |
text-generation | transformers | # Barney Calhoun DialoGPT Model | {"tags": ["conversational"]} | Jonesy/DialoGPT-medium_Barney | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Barney Calhoun DialoGPT Model | [
"# Barney Calhoun DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Barney Calhoun DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Barney Calhoun DialoGPT Model"
] |
text-generation | transformers | # Family Guy DialoGPT Model | {"tags": ["conversational"]} | Jonesy/FG_OLD | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Family Guy DialoGPT Model | [
"# Family Guy DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Family Guy DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Family Guy DialoGPT Model"
] |
text-generation | transformers | # Johnny Test DialoGPT Model | {"tags": ["conversational"]} | Jonesy/DialoGPT-small_JT | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Johnny Test DialoGPT Model | [
"# Johnny Test DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Johnny Test DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Johnny Test DialoGPT Model"
] |
text2text-generation | transformers | This is a smaller version of the google/mt5-base model with only Spanish and some English embeddings trained on 60k Spanish MLSum for summarization.
You can use it with the command "summarize:"
| {"language": "es"} | JorgeSarry/est5-summarize | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"es",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #mt5 #text2text-generation #es #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| This is a smaller version of the google/mt5-base model with only Spanish and some English embeddings trained on 60k Spanish MLSum for summarization.
You can use it with the command "summarize:"
| [] | [
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #es #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #es #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text2text-generation | transformers | This is a smaller version of the google/mt5-base model with only Spanish and some English embeddings trained on 60k Spanish WikiEdits for sentence simplification.
You can use it with the command "simplify:"
| {"language": "es"} | JorgeSarry/est5base-simplify | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"es",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #mt5 #text2text-generation #es #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| This is a smaller version of the google/mt5-base model with only Spanish and some English embeddings trained on 60k Spanish WikiEdits for sentence simplification.
You can use it with the command "simplify:"
| [] | [
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #es #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #es #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text2text-generation | transformers | This is a smaller version of the google/mt5-base model with only Spanish and some English embeddings left following the procedure outlined here https://towardsdatascience.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90
The original model has 582M parameters, with 384M of them being input and output embeddings.
After shrinking the sentencepiece vocabulary from 250K to 30K (top 10K English and top 20K Spanish tokens), the number of parameters was reduced to 244M, resulting in a model size reduction from 2.2GB to 0.9GB - 42% of the original.
| {"language": "es"} | JorgeSarry/est5base | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"es",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #t5 #text2text-generation #es #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| This is a smaller version of the google/mt5-base model with only Spanish and some English embeddings left following the procedure outlined here URL
The original model has 582M parameters, with 384M of them being input and output embeddings.
After shrinking the sentencepiece vocabulary from 250K to 30K (top 10K English and top 20K Spanish tokens), the number of parameters was reduced to 244M, resulting in a model size reduction from 2.2GB to 0.9GB - 42% of the original.
| [] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #es #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #es #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-ner
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0626
- Precision: 0.9252
- Recall: 0.9330
- F1: 0.9291
- Accuracy: 0.9848
## Model description
More information needed
## limitations
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occasionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jorgeutd/albert-base-v2-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("Jorgeutd/albert-base-v2-finetuned-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Scott and I live in Ohio"
ner_results = nlp(example)
print(ner_results)
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 220 | 0.0863 | 0.8827 | 0.8969 | 0.8898 | 0.9773 |
| No log | 2.0 | 440 | 0.0652 | 0.8951 | 0.9199 | 0.9073 | 0.9809 |
| 0.1243 | 3.0 | 660 | 0.0626 | 0.9191 | 0.9208 | 0.9200 | 0.9827 |
| 0.1243 | 4.0 | 880 | 0.0585 | 0.9227 | 0.9281 | 0.9254 | 0.9843 |
| 0.0299 | 5.0 | 1100 | 0.0626 | 0.9252 | 0.9330 | 0.9291 | 0.9848 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": "en", "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "widget": [{"text": "My name is Scott and I live in Columbus."}, {"text": "Apple was founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne."}], "base_model": "albert-base-v2", "model-index": [{"name": "albert-base-v2-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9252213840603477, "name": "Precision"}, {"type": "recall", "value": 0.9329732113328189, "name": "Recall"}, {"type": "f1", "value": 0.9290811285541773, "name": "F1"}, {"type": "accuracy", "value": 0.9848205157332728, "name": "Accuracy"}]}]}]} | Jorgeutd/albert-base-v2-finetuned-ner | null | [
"transformers",
"pytorch",
"albert",
"token-classification",
"generated_from_trainer",
"en",
"dataset:conll2003",
"base_model:albert-base-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #albert #token-classification #generated_from_trainer #en #dataset-conll2003 #base_model-albert-base-v2 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| albert-base-v2-finetuned-ner
============================
This model is a fine-tuned version of albert-base-v2 on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0626
* Precision: 0.9252
* Recall: 0.9330
* F1: 0.9291
* Accuracy: 0.9848
Model description
-----------------
More information needed
limitations
-----------
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occasionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.
#### How to use
You can use this model with Transformers *pipeline* for NER.
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.8.1+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"#### Limitations and bias\n\n\nThis model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occassionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.",
"#### How to use\n\n\nYou can use this model with Transformers *pipeline* for NER.\n\n\nTraining and evaluation data\n----------------------------\n\n\nMore information needed\n\n\nTraining procedure\n------------------",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #albert #token-classification #generated_from_trainer #en #dataset-conll2003 #base_model-albert-base-v2 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"#### Limitations and bias\n\n\nThis model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occassionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.",
"#### How to use\n\n\nYou can use this model with Transformers *pipeline* for NER.\n\n\nTraining and evaluation data\n----------------------------\n\n\nMore information needed\n\n\nTraining procedure\n------------------",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
67,
74,
76,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #albert #token-classification #generated_from_trainer #en #dataset-conll2003 #base_model-albert-base-v2 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n#### Limitations and bias\n\n\nThis model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occassionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.#### How to use\n\n\nYou can use this model with Transformers *pipeline* for NER.\n\n\nTraining and evaluation data\n----------------------------\n\n\nMore information needed\n\n\nTraining procedure\n------------------### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5### Training results### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-classification | transformers |
## bert-base-uncased
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
- Problem type: Text Classification (adverse drug effects detection).
## Hyperparameters
```json
{
"do_eval": true,
"do_train": true,
"fp16": true,
"load_best_model_at_end": true,
"model_name": "bert-base-uncased",
"num_train_epochs": 10,
"per_device_eval_batch_size": 16,
"per_device_train_batch_size": 16,
"learning_rate":5e-5
}
```
## Validation Metrics
| key | value |
| --- | ----- |
| eval_accuracy | 0.9298021697511167 |
| eval_auc | 0.8902672664394546 |
| eval_f1 | 0.827315541601256 |
| eval_loss | 0.17835010588169098 |
| eval_recall | 0.8234375 |
| eval_precision | 0.831230283911672 |
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I got a rash from taking acetaminophen"}' https://api-inference.huggingface.co/models/Jorgeutd/bert-base-uncased-ade-Ade-corpus-v2
```
""" | {"language": "en", "license": "apache-2.0", "tags": ["sagemaker", "bert-base-uncased", "text classification"], "datasets": ["adecorpusv2"], "widget": [{"text": "I got a rash from taking acetaminophen"}], "model-index": [{"name": "BERT-ade_corpus", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "ade_corpus_v2Ade_corpus_v2_classification", "type": "ade_corpus"}, "metrics": [{"type": "accuracy", "value": 92.98, "name": "Validation Accuracy"}, {"type": "f1", "value": 82.73, "name": "Validation F1"}]}]}]} | Jorgeutd/bert-base-uncased-ade-Ade-corpus-v2 | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"sagemaker",
"bert-base-uncased",
"text classification",
"en",
"dataset:adecorpusv2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #bert #text-classification #sagemaker #bert-base-uncased #text classification #en #dataset-adecorpusv2 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| bert-base-uncased
-----------------
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning container.
* Problem type: Text Classification (adverse drug effects detection).
Hyperparameters
---------------
Validation Metrics
------------------
Usage
-----
You can use cURL to access this model:
"""
| [] | [
"TAGS\n#transformers #pytorch #safetensors #bert #text-classification #sagemaker #bert-base-uncased #text classification #en #dataset-adecorpusv2 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
69
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #text-classification #sagemaker #bert-base-uncased #text classification #en #dataset-adecorpusv2 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-surveyclassification
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on a custom survey dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2818
- Accuracy: 0.9097
- F1: 0.9097
## Model description
More information needed
#### Limitations and bias
This model is limited by its training dataset of survey results for a particular customer service domain. This may not generalize well for all use cases in different domains.
#### How to use
You can use this model with Transformers *pipeline* for Text Classification.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("Jorgeutd/bert-base-uncased-finetuned-surveyclassification")
model = AutoModelForSequenceClassification.from_pretrained("Jorgeutd/bert-base-uncased-finetuned-surveyclassification")
text_classifier = pipeline("text-classification", model=model,tokenizer=tokenizer, device=0)
example = "The agent on the phone was very helpful and nice to me."
results = text_classifier(example)
print(results)
```
## Training and evaluation data
Custom survey dataset.
## Training procedure
SageMaker notebook instance.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4136 | 1.0 | 902 | 0.2818 | 0.9097 | 0.9097 |
| 0.2213 | 2.0 | 1804 | 0.2990 | 0.9077 | 0.9077 |
| 0.1548 | 3.0 | 2706 | 0.3507 | 0.9026 | 0.9026 |
| 0.1034 | 4.0 | 3608 | 0.4692 | 0.9011 | 0.9011 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": "en", "license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "widget": [{"text": "The agent on the phone was very helpful and nice to me."}], "base_model": "bert-base-uncased", "model-index": [{"name": "bert-base-uncased-finetuned-surveyclassification", "results": []}]} | Jorgeutd/bert-base-uncased-finetuned-surveyclassification | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #bert #text-classification #generated_from_trainer #en #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| bert-base-uncased-finetuned-surveyclassification
================================================
This model is a fine-tuned version of bert-base-uncased on a custom survey dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2818
* Accuracy: 0.9097
* F1: 0.9097
Model description
-----------------
More information needed
#### Limitations and bias
This model is limited by its training dataset of survey results for a particular customer service domain. This may not generalize well for all use cases in different domains.
#### How to use
You can use this model with Transformers *pipeline* for Text Classification.
Training and evaluation data
----------------------------
Custom survey dataset.
Training procedure
------------------
SageMaker notebook instance.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.8.1+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"#### Limitations and bias\n\n\nThis model is limited by its training dataset of survey results for a particular customer service domain. This may not generalize well for all use cases in different domains.",
"#### How to use\n\n\nYou can use this model with Transformers *pipeline* for Text Classification.\n\n\nTraining and evaluation data\n----------------------------\n\n\nCustom survey dataset.\n\n\nTraining procedure\n------------------\n\n\nSageMaker notebook instance.",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #text-classification #generated_from_trainer #en #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### Limitations and bias\n\n\nThis model is limited by its training dataset of survey results for a particular customer service domain. This may not generalize well for all use cases in different domains.",
"#### How to use\n\n\nYou can use this model with Transformers *pipeline* for Text Classification.\n\n\nTraining and evaluation data\n----------------------------\n\n\nCustom survey dataset.\n\n\nTraining procedure\n------------------\n\n\nSageMaker notebook instance.",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
59,
40,
83,
128,
5,
44
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #text-classification #generated_from_trainer #en #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n#### Limitations and bias\n\n\nThis model is limited by its training dataset of survey results for a particular customer service domain. This may not generalize well for all use cases in different domains.#### How to use\n\n\nYou can use this model with Transformers *pipeline* for Text Classification.\n\n\nTraining and evaluation data\n----------------------------\n\n\nCustom survey dataset.\n\n\nTraining procedure\n------------------\n\n\nSageMaker notebook instance.### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-ner
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0778
- Precision: 0.9505
- Recall: 0.9575
- F1: 0.9540
- Accuracy: 0.9886
## Model description
More information needed
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occasionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jorgeutd/bert-large-uncased-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("Jorgeutd/bert-large-uncased-finetuned-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Scott and I live in Ohio"
ner_results = nlp(example)
print(ner_results)
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1997 | 1.0 | 878 | 0.0576 | 0.9316 | 0.9257 | 0.9286 | 0.9837 |
| 0.04 | 2.0 | 1756 | 0.0490 | 0.9400 | 0.9513 | 0.9456 | 0.9870 |
| 0.0199 | 3.0 | 2634 | 0.0557 | 0.9436 | 0.9540 | 0.9488 | 0.9879 |
| 0.0112 | 4.0 | 3512 | 0.0602 | 0.9443 | 0.9569 | 0.9506 | 0.9881 |
| 0.0068 | 5.0 | 4390 | 0.0631 | 0.9451 | 0.9589 | 0.9520 | 0.9882 |
| 0.0044 | 6.0 | 5268 | 0.0638 | 0.9510 | 0.9567 | 0.9538 | 0.9885 |
| 0.003 | 7.0 | 6146 | 0.0722 | 0.9495 | 0.9560 | 0.9527 | 0.9885 |
| 0.0016 | 8.0 | 7024 | 0.0762 | 0.9491 | 0.9595 | 0.9543 | 0.9887 |
| 0.0018 | 9.0 | 7902 | 0.0769 | 0.9496 | 0.9542 | 0.9519 | 0.9883 |
| 0.0009 | 10.0 | 8780 | 0.0778 | 0.9505 | 0.9575 | 0.9540 | 0.9886 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"language": "en", "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "widget": [{"text": "My name is Scott and I live in Columbus."}, {"text": "My name is Scott and I am calling from Buffalo, NY. I would like to file a complain with United Airlines."}, {"text": "Apple was founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne."}], "base_model": "bert-large-uncased", "model-index": [{"name": "bert-large-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9504719600222099, "name": "Precision"}, {"type": "recall", "value": 0.9574896520863632, "name": "Recall"}, {"type": "f1", "value": 0.9539679001337494, "name": "F1"}, {"type": "accuracy", "value": 0.9885618059637473, "name": "Accuracy"}]}]}]} | Jorgeutd/bert-large-uncased-finetuned-ner | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"en",
"dataset:conll2003",
"base_model:bert-large-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #bert #token-classification #generated_from_trainer #en #dataset-conll2003 #base_model-bert-large-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| bert-large-uncased-finetuned-ner
================================
This model is a fine-tuned version of bert-large-uncased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0778
* Precision: 0.9505
* Recall: 0.9575
* F1: 0.9540
* Accuracy: 0.9886
Model description
-----------------
More information needed
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occassionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.
#### How to use
You can use this model with Transformers *pipeline* for NER.
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.8.1+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"#### Limitations and bias\n\n\nThis model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occassionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.",
"#### How to use\n\n\nYou can use this model with Transformers *pipeline* for NER.\n\n\nTraining procedure\n------------------",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #token-classification #generated_from_trainer #en #dataset-conll2003 #base_model-bert-large-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"#### Limitations and bias\n\n\nThis model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occassionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.",
"#### How to use\n\n\nYou can use this model with Transformers *pipeline* for NER.\n\n\nTraining procedure\n------------------",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
71,
74,
41,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #token-classification #generated_from_trainer #en #dataset-conll2003 #base_model-bert-large-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n#### Limitations and bias\n\n\nThis model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occassionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.#### How to use\n\n\nYou can use this model with Transformers *pipeline* for NER.\n\n\nTraining procedure\n------------------### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10### Training results### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.8.1+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-classification | transformers | ## roberta-base
This model was fine-tuned using Amazon SageMaker and the new Hugging Face Deep Learning container.
- Problem type: Multi Class Text Classification (emotion detection).
It achieves the following results on the evaluation set:
- Loss: 0.1613253802061081
- f1: 0.9413321705151999
## Hyperparameters
```json
{
"epochs": 10,
"train_batch_size": 16,
"learning_rate": 3e-5,
"weight_decay":0.01,
"load_best_model_at_end": true,
"model_name":"roberta-base",
"do_eval": True,
"load_best_model_at_end":True
}
```
## Validation Metrics
| key | value |
| --- | ----- |
| eval_accuracy | 0.941 |
| eval_f1 | 0.9413321705151999 |
| eval_loss | 0.1613253802061081|
| eval_recall | 0.941 |
| eval_precision | 0.9419519436781406 |
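## Usage

A minimal inference sketch (assumptions: the standard Transformers `text-classification` pipeline and an example sentence adapted from the widget text; the commented output is only the expected shape, not a real prediction):

```python
# Hypothetical usage sketch; the example sentence is an illustrative assumption.
from transformers import pipeline

classifier = pipeline("text-classification", model="Jorgeutd/sagemaker-roberta-base-emotion")

print(classifier("I am really upset that I have to call up to three times for my call to be answered."))
# Expected output shape: [{'label': <one of the emotion dataset labels>, 'score': <confidence>}]
```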
| {"language": "en", "license": "apache-2.0", "tags": ["sagemaker", "roberta-base", "text classification"], "datasets": ["emotion"], "widget": [{"text": "I am really upset that I have to call up to three times to the number on the back of my insurance card for my call to be answer"}], "model-index": [{"name": "sagemaker-roberta-base-emotion", "results": [{"task": {"type": "text-classification", "name": "Multi Class Text Classification"}, "dataset": {"name": "emotion", "type": "emotion"}, "metrics": [{"type": "accuracy", "value": 94.1, "name": "Validation Accuracy"}, {"type": "f1", "value": 94.13, "name": "Validation F1"}]}, {"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "default", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.931, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmM1ZmI0NjZhYjdlMWU4NWUwZmFjODFmMmM5MTlhMmEyMmQwOTk2NjQ5ZDNlYmFlMGEyMTY4Y2JiMTcwM2MwNiIsInZlcnNpb24iOjF9.haDbUk1y7nW1e_ext0s1xKefyOzep-XFa1HEkNQEcNV0cHCSRb-0YFakMf5Iee6q_EWFUS-QYxNkgEBlbw3fCQ"}, {"type": "precision", "value": 0.8833042147663716, "name": "Precision Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjZkOTQyMzkwYjE1ZWQ5YjJkMTEzNmIyZmFlMjkwY2YxNzA3OWE0ZDk5YjJlOWVhOTU5Nzc4ZTk5Mzg5NDcxOCIsInZlcnNpb24iOjF9._XhknNSsiailHiMr1SH9ki7SRswR_b-embALunoCjhBssh9WERkv0z1xpsbw7ORo0wx7WCslZRdJWaQoXOmgDQ"}, {"type": "precision", "value": 0.931, "name": "Precision Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGY0MTc0ZDBiYmZlYmFmMTcyYjk5MWM0MTRmYTlhY2U1ODY5NTQzNTQ5YjAzN2U0YjljNDAzZDQ5NDBkZDUwYyIsInZlcnNpb24iOjF9.313HYKetR4S4kjcMvEk9Yj2J-Ox8ZqvVk4FLrF6UmxlXYZ4F3put-89BEOxGl_ScugjjAWhKY1pHLPYpKz9PAA"}, {"type": "precision", "value": 0.9337002742192515, "name": "Precision Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjQ1ZDIzNmE3MjljMTk2NTBmNzcyMTEyOTUwZTljYTA2MjIwY2E4ZThkNGVjYjQwNzU3MTcxMzBiYzJkNWIzOSIsInZlcnNpb24iOjF9.6yXKQ9WS9AWdt1jxixtA5O2S1bcPTKQqIOw291Ytam8OI-zdTI2jwltT6JdU4lHdhTi5797zeNldJMCxGPR2DQ"}, {"type": "recall", "value": 0.9087144572668905, "name": "Recall Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzJhNTFmNGJkYTAxNzRiOWQ4YzQyMGY5NGQxMjBiMmRjZTA5OTM2ZjM0NWY0ZDJiOTIyODQzZTZkMzEzZmY4YSIsInZlcnNpb24iOjF9.Fy1gkGvRiyANGU6nYgc5QbhccqAfb4PjxEk1EkJAIAZJjs-f0hffwUDlJt_6gRY3KKnoU2kKg1XxpWjybRY7BQ"}, {"type": "recall", "value": 0.931, "name": "Recall Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTgwYWJmZDAzM2VkOGNjNjY3NjViOTFiMTYyZDc4ZDIzY2VhNTcwMDg3MjdiOTI4Nzc5ODI4N2ExYzY5ODAzMyIsInZlcnNpb24iOjF9.bEW-tZ-5JqkPDDfqkrdvzlzTGEJtYqRACZI1Jv7C8fWkJ8uJj0eQ8TDhcdGGDnFML-q1z3tnkO6PJuK9V2IxAg"}, {"type": "recall", "value": 0.931, "name": "Recall Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTM2ZDk4NDQ2YWIwM2VjNzUxZjQ0YzU4MzViZGMzYzA3YjlhMTI1NjQwOTM3M2U4NGJhNTMxYzllMjRkMzU2NSIsInZlcnNpb24iOjF9.k9yprOWEoB0-k306GyDGF-g4uw3kABLc8iE_3E5ZYfVbo9VHPo61GuSsWJyYJ7_aq6zWbzgfOFEwUeVjcmnaDA"}, {"type": "f1", "value": 0.8949974527433656, "name": "F1 Macro", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODg0ZDllYWJkYWZkMWY2NjEzYWIxMWIwMWUyZDhmNWEzM2FmN2E0MWEwOTIyMTM2YTI1MDdmYmRmZWQ5ZmVmNCIsInZlcnNpb24iOjF9.DUD3dfb4vRu-Z9YxvDErJaPLuZIEDBNsdqzkf4ee6dkOCOnYtUhGAybnxtGN1xSYsynXYhU-ymCajWcrVKUCAA"}, {"type": "f1", "value": 0.931, "name": "F1 Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGU0MTYyOTNjOTBmNzAxNjVlZmQxYmRkMmE5MWY2NzhlNjg0ZGZkMmNmZmI3Zjk1NjJlYTdjMGRhMDMwYzAzNCIsInZlcnNpb24iOjF9.h0wCmhwRT4qRZJcc2zGP3T7dF0_wKdKzTtSVoVWFOUzQZ3RoeY2Hfjl3XA7yyw9KnoDWnLiW8DU_5kOBX-peCQ"}, {"type": "f1", "value": 0.9318434300647934, "name": "F1 Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmU4OGY4M2NkYWExNjI3Yjk0YmYzNWJjZGQ5ZGNmYzc4ZDk4YzRmZDRiNmRkN2VlNDZhOGIwZDc3MzcxYjVlYiIsInZlcnNpb24iOjF9.qhwi7AV-7NSm1yVd8v1Ea3nTRAFXfqLMwUJ5PUbPSa11jJ0tZNOQVDXHMAD8fVmoueLgZNRUpPVIB881Sq3EBg"}, {"type": "loss", "value": 0.17379647493362427, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDdjODE2MjA5ODg2MmM2OWJmMjMzMzUzNGU1ZDc5NjRkNGU4N2VmNmM2NWE0YTEyYWMxNGUzN2M3YTkxNzUyMCIsInZlcnNpb24iOjF9.qcQWfHuRnfiluicR7gke3vm9u701hB4Bp0YaX2opaxL6d5DRCzuqAg-2kdmhhOL-8DW5JhY6gTrF14AEuEE9Cw"}]}]}]} | Jorgeutd/sagemaker-roberta-base-emotion | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"sagemaker",
"roberta-base",
"text classification",
"en",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #roberta #text-classification #sagemaker #roberta-base #text classification #en #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| roberta-base
------------
This model was fine-tuned using Amazon SageMaker and the new Hugging Face Deep Learning container.
* Problem type: Multi Class Text Classification (emotion detection).
It achieves the following results on the evaluation set:
* Loss: 0.1613253802061081
* f1: 0.9413321705151999
Hyperparameters
---------------
Validation Metrics
------------------
| [] | [
"TAGS\n#transformers #pytorch #safetensors #roberta #text-classification #sagemaker #roberta-base #text classification #en #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
61
] | [
"TAGS\n#transformers #pytorch #safetensors #roberta #text-classification #sagemaker #roberta-base #text classification #en #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n"
] |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri1Mix_enhsignle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 1
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 6
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 14.743051006476085
si_sdr_imp: 11.293269700616385
sdr: 15.300522933671061
sdr_imp: 11.797860134458015
sir: Infinity
sir_imp: NaN
sar: 15.300522933671061
sar_imp: 11.797860134458015
stoi: 0.9310514162434267
stoi_imp: 0.13513159270288563
```
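A minimal inference sketch (this relies on the generic Asteroid `from_pretrained`/`separate` helpers rather than anything specific to this recipe; the input file name is a placeholder):

```python
# Hypothetical usage sketch: enhance a noisy 16 kHz mono recording.
# "noisy_speech.wav" is a placeholder; separate() is assumed to write the
# estimate next to the input (e.g. "noisy_speech_est1.wav").
from asteroid.models import ConvTasNet

model = ConvTasNet.from_pretrained("JorisCos/ConvTasNet_Libri1Mix_enhsingle_16k")
model.separate("noisy_speech.wav")
```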
License notice:
This work "ConvTasNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri1Mix", "enh_single"]} | JorisCos/ConvTasNet_Libri1Mix_enhsingle_16k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #has_space #region-us
|
## Asteroid model 'JorisCos/ConvTasNet_Libri1Mix_enhsignle_16k'
Description:
This model was trained by Joris Cosentino using the librimix recipe in Asteroid.
It was trained on the 'enh_single' task of the Libri1Mix dataset.
Training config:
Results:
On Libri1Mix min test set :
License notice:
This work "ConvTasNet_Libri1Mix_enhsignle_16k" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,
used under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures
dataset by URL, used under CC BY-NC 4.0 (Research only).
"ConvTasNet_Libri1Mix_enhsignle_16k" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino | [
"## Asteroid model 'JorisCos/ConvTasNet_Libri1Mix_enhsignle_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri1Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri1Mix_enhsignle_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"ConvTasNet_Libri1Mix_enhsignle_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #has_space #region-us \n",
"## Asteroid model 'JorisCos/ConvTasNet_Libri1Mix_enhsignle_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri1Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri1Mix_enhsignle_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"ConvTasNet_Libri1Mix_enhsignle_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
58,
215
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #has_space #region-us \n## Asteroid model 'JorisCos/ConvTasNet_Libri1Mix_enhsignle_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri1Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri1Mix_enhsignle_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"ConvTasNet_Libri1Mix_enhsignle_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepclean_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri2Mix dataset.
Training config:
```yaml
data:
n_src: 2
sample_rate: 16000
segment: 3
task: sep_clean
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 6
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results :
On Libri2Mix min test set :
```yaml
si_sdr: 15.243671356901526
si_sdr_imp: 15.243034178473609
sdr: 15.668108919568112
sdr_imp: 15.578229918028036
sir: 25.295100756629957
sir_imp: 25.205219921301754
sar: 16.307682590197313
sar_imp: -51.64989963759405
stoi: 0.9394951175291422
stoi_imp: 0.22640192740016568
```
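A minimal separation sketch (this uses the generic Asteroid API; the random tensor is a stand-in for a real 16 kHz two-speaker mixture and the code is illustrative only):

```python
# Hypothetical usage sketch: separate a two-speaker mixture into its sources.
import torch
from asteroid.models import ConvTasNet

model = ConvTasNet.from_pretrained("JorisCos/ConvTasNet_Libri2Mix_sepclean_16k")

mixture = torch.randn(1, 16000 * 3)      # stand-in for (batch, time) 16 kHz audio
with torch.no_grad():
    est_sources = model(mixture)         # (batch, n_src=2, time)
print(est_sources.shape)
```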
License notice:
This work "ConvTasNet_Libri2Mix_sepclean_16k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri2Mix_sepclean_16k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris. | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri2Mix", "sep_clean"]} | JorisCos/ConvTasNet_Libri2Mix_sepclean_16k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri2Mix",
"dataset:sep_clean",
"license:cc-by-sa-4.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri2Mix #dataset-sep_clean #license-cc-by-sa-4.0 #has_space #region-us
|
## Asteroid model 'JorisCos/ConvTasNet_Libri2Mix_sepclean_16k'
Description:
This model was trained by Joris Cosentino using the librimix recipe in Asteroid.
It was trained on the 'sep_clean' task of the Libri2Mix dataset.
Training config:
Results :
On Libri2Mix min test set :
License notice:
This work "ConvTasNet_Libri2Mix_sepclean_16k"
is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,
used under CC BY 4.0. "ConvTasNet_Libri2Mix_sepclean_16k"
is licensed under Attribution-ShareAlike 3.0 Unported by Cosentino Joris. | [
"## Asteroid model 'JorisCos/ConvTasNet_Libri2Mix_sepclean_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid. \nIt was trained on the 'sep_clean' task of the Libri2Mix dataset.\n\nTraining config:\n\n\n\nResults :\n\nOn Libri2Mix min test set :\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri2Mix_sepclean_16k\" \nis a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0. \"ConvTasNet_Libri2Mix_sepclean_16k\" \nis licensed under Attribution-ShareAlike 3.0 Unported by Cosentino Joris."
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri2Mix #dataset-sep_clean #license-cc-by-sa-4.0 #has_space #region-us \n",
"## Asteroid model 'JorisCos/ConvTasNet_Libri2Mix_sepclean_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid. \nIt was trained on the 'sep_clean' task of the Libri2Mix dataset.\n\nTraining config:\n\n\n\nResults :\n\nOn Libri2Mix min test set :\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri2Mix_sepclean_16k\" \nis a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0. \"ConvTasNet_Libri2Mix_sepclean_16k\" \nis licensed under Attribution-ShareAlike 3.0 Unported by Cosentino Joris."
] | [
57,
181
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri2Mix #dataset-sep_clean #license-cc-by-sa-4.0 #has_space #region-us \n## Asteroid model 'JorisCos/ConvTasNet_Libri2Mix_sepclean_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid. \nIt was trained on the 'sep_clean' task of the Libri2Mix dataset.\n\nTraining config:\n\n\n\nResults :\n\nOn Libri2Mix min test set :\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri2Mix_sepclean_16k\" \nis a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0. \"ConvTasNet_Libri2Mix_sepclean_16k\" \nis licensed under Attribution-ShareAlike 3.0 Unported by Cosentino Joris."
] |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepclean_8k`
Imported from [Zenodo](https://zenodo.org/record/3873572#.X9M69cLjJH4)
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri2Mix dataset.
Training config:
```yaml
data:
n_src: 2
sample_rate: 8000
segment: 3
task: sep_clean
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
num_workers: 2
```
Results :
On Libri2Mix min test set :
```yaml
si_sdr: 14.764543634468069
si_sdr_imp: 14.764029375607246
sdr: 15.29337970745095
sdr_imp: 15.114146605113111
sir: 24.092904661115366
sir_imp: 23.913669683141528
sar: 16.06055906916849
sar_imp: -51.980784441287454
stoi: 0.9311142440593033
stoi_imp: 0.21817376142710482
```
License notice:
This work "ConvTasNet_Libri2Mix_sepclean_8k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri2Mix_sepclean_8k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris. | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri2Mix", "sep_clean"]} | JorisCos/ConvTasNet_Libri2Mix_sepclean_8k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri2Mix",
"dataset:sep_clean",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri2Mix #dataset-sep_clean #license-cc-by-sa-4.0 #region-us
|
## Asteroid model 'JorisCos/ConvTasNet_Libri2Mix_sepclean_8k'
Imported from Zenodo
Description:
This model was trained by Joris Cosentino using the librimix recipe in Asteroid.
It was trained on the 'sep_clean' task of the Libri2Mix dataset.
Training config:
Results :
On Libri2Mix min test set :
License notice:
This work "ConvTasNet_Libri2Mix_sepclean_8k"
is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,
used under CC BY 4.0. "ConvTasNet_Libri2Mix_sepclean_8k"
is licensed under Attribution-ShareAlike 3.0 Unported by Cosentino Joris. | [
"## Asteroid model 'JorisCos/ConvTasNet_Libri2Mix_sepclean_8k'\nImported from Zenodo\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid. \nIt was trained on the 'sep_clean' task of the Libri2Mix dataset.\n\nTraining config:\n\n\n\nResults :\n\nOn Libri2Mix min test set :\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri2Mix_sepclean_8k\" \nis a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0. \"ConvTasNet_Libri2Mix_sepclean_8k\" \nis licensed under Attribution-ShareAlike 3.0 Unported by Cosentino Joris."
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri2Mix #dataset-sep_clean #license-cc-by-sa-4.0 #region-us \n",
"## Asteroid model 'JorisCos/ConvTasNet_Libri2Mix_sepclean_8k'\nImported from Zenodo\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid. \nIt was trained on the 'sep_clean' task of the Libri2Mix dataset.\n\nTraining config:\n\n\n\nResults :\n\nOn Libri2Mix min test set :\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri2Mix_sepclean_8k\" \nis a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0. \"ConvTasNet_Libri2Mix_sepclean_8k\" \nis licensed under Attribution-ShareAlike 3.0 Unported by Cosentino Joris."
] | [
53,
186
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri2Mix #dataset-sep_clean #license-cc-by-sa-4.0 #region-us \n## Asteroid model 'JorisCos/ConvTasNet_Libri2Mix_sepclean_8k'\nImported from Zenodo\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid. \nIt was trained on the 'sep_clean' task of the Libri2Mix dataset.\n\nTraining config:\n\n\n\nResults :\n\nOn Libri2Mix min test set :\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri2Mix_sepclean_8k\" \nis a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0. \"ConvTasNet_Libri2Mix_sepclean_8k\" \nis licensed under Attribution-ShareAlike 3.0 Unported by Cosentino Joris."
] |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepnoisy_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri2Mix dataset.
Training config:
```yml
data:
n_src: 2
sample_rate: 16000
segment: 3
task: sep_noisy
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 2
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 6
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri2Mix min test set :
```yml
si_sdr: 10.617130949793383
si_sdr_imp: 12.551811412989263
sdr: 11.231867464482065
sdr_imp: 13.059765009747343
sir: 24.461138352988346
sir_imp: 24.371856452307703
sar: 11.5649982725426
sar_imp: 4.662525705768228
stoi: 0.8701085138712695
stoi_imp: 0.2245418019822898
```
License notice:
This work "ConvTasNet_Libri2Mix_sepnoisy_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri2Mix_sepnoisy_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri2Mix", "sep_noisy"]} | JorisCos/ConvTasNet_Libri2Mix_sepnoisy_16k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri2Mix",
"dataset:sep_noisy",
"license:cc-by-sa-4.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri2Mix #dataset-sep_noisy #license-cc-by-sa-4.0 #has_space #region-us
|
## Asteroid model 'JorisCos/ConvTasNet_Libri2Mix_sepnoisy_16k'
Description:
This model was trained by Joris Cosentino using the librimix recipe in Asteroid.
It was trained on the 'sep_noisy' task of the Libri2Mix dataset.
Training config:
Results:
On Libri2Mix min test set :
License notice:
This work "ConvTasNet_Libri2Mix_sepnoisy_16k" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,
used under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures
dataset by URL, used under CC BY-NC 4.0 (Research only).
"ConvTasNet_Libri2Mix_sepnoisy_16k" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino | [
"## Asteroid model 'JorisCos/ConvTasNet_Libri2Mix_sepnoisy_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'sep_noisy' task of the Libri2Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\n\nOn Libri2Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri2Mix_sepnoisy_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused underCC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"ConvTasNet_Libri2Mix_sepnoisy_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri2Mix #dataset-sep_noisy #license-cc-by-sa-4.0 #has_space #region-us \n",
"## Asteroid model 'JorisCos/ConvTasNet_Libri2Mix_sepnoisy_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'sep_noisy' task of the Libri2Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\n\nOn Libri2Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri2Mix_sepnoisy_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused underCC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"ConvTasNet_Libri2Mix_sepnoisy_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
57,
214
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri2Mix #dataset-sep_noisy #license-cc-by-sa-4.0 #has_space #region-us \n## Asteroid model 'JorisCos/ConvTasNet_Libri2Mix_sepnoisy_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'sep_noisy' task of the Libri2Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\n\nOn Libri2Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri2Mix_sepnoisy_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused underCC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"ConvTasNet_Libri2Mix_sepnoisy_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k`
Imported from [Zenodo](https://zenodo.org/record/3874420#.X9I6NcLjJH4)
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri2Mix dataset.
Training config:
```yml
data:
n_src: 2
sample_rate: 8000
segment: 3
task: sep_noisy
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: True
epochs: 200
half_lr: True
num_workers: 4
```
Results:
On Libri2Mix min test set :
```yml
si_sdr: 9.944424856077259
si_sdr_imp: 11.939395359731192
sdr: 10.701526190782072
sdr_imp: 12.481757547845662
sir: 22.633644975545575
sir_imp: 22.45666740833025
sar: 11.131644100944868
sar_imp: 4.248489589311784
stoi: 0.852048619949357
stoi_imp: 0.2071994899565506
```
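(Not stated in the original card, but for orientation: metrics ending in `_imp` are improvements over the unprocessed mixture, so the input mixture itself sits at roughly si_sdr − si_sdr_imp ≈ 9.94 − 11.94 ≈ −2.0 dB SI-SDR on this test set.)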
License notice:
This work "ConvTasNet_Libri2Mix_sepnoisy_8k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri2Mix_sepnoisy_8k" is licensed under A[Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri2Mix", "sep_noisy"]} | JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri2Mix",
"dataset:sep_noisy",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri2Mix #dataset-sep_noisy #license-cc-by-sa-4.0 #region-us
|
## Asteroid model 'JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k'
Imported from Zenodo
Description:
This model was trained by Joris Cosentino using the librimix recipe in Asteroid.
It was trained on the 'sep_noisy' task of the Libri2Mix dataset.
Training config:
Results:
On Libri2Mix min test set :
License notice:
This work "ConvTasNet_Libri2Mix_sepnoisy_8k" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,
used under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures
dataset by URL, used under CC BY-NC 4.0 (Research only).
"ConvTasNet_Libri2Mix_sepnoisy_8k" is licensed under AAttribution-ShareAlike 3.0 Unported by Joris Cosentino | [
"## Asteroid model 'JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k'\nImported from Zenodo\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'sep_noisy' task of the Libri2Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri2Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri2Mix_sepnoisy_8k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"ConvTasNet_Libri2Mix_sepnoisy_8k\" is licensed under AAttribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri2Mix #dataset-sep_noisy #license-cc-by-sa-4.0 #region-us \n",
"## Asteroid model 'JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k'\nImported from Zenodo\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'sep_noisy' task of the Libri2Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri2Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri2Mix_sepnoisy_8k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"ConvTasNet_Libri2Mix_sepnoisy_8k\" is licensed under AAttribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
53,
220
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri2Mix #dataset-sep_noisy #license-cc-by-sa-4.0 #region-us \n## Asteroid model 'JorisCos/ConvTasNet_Libri2Mix_sepnoisy_8k'\nImported from Zenodo\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'sep_noisy' task of the Libri2Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri2Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri2Mix_sepnoisy_8k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"ConvTasNet_Libri2Mix_sepnoisy_8k\" is licensed under AAttribution-ShareAlike 3.0 Unported by Joris Cosentino"
] |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepclean_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri3Mix dataset.
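To make the three-speaker setting concrete, a minimal tensor-level sketch is shown below (the random waveform stands in for a real 16 kHz mixture; shapes follow the usual Asteroid conventions and are assumptions, not taken from the original card):

```python
import torch
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("JorisCos/ConvTasNet_Libri3Mix_sepclean_16k")

# Placeholder batch: one 3-second mixture at 16 kHz (swap in real audio)
mixture = torch.randn(1, 3 * 16000)

with torch.no_grad():
    est_sources = model(mixture)  # expected shape: (batch, 3, time), one channel per speaker
```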
Training config:
```yaml
data:
n_src: 3
sample_rate: 16000
segment: 3
task: sep_clean
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results :
On Libri3Mix min test set :
```yaml
si_sdr: 8.932601610824145
si_sdr_imp: 12.299341066588594
sdr: 9.557260814240447
sdr_imp: 12.76957128385349
sir: 17.387646884037455
sir_imp: 20.599955591768484
sar: 10.686885056960504
sar_imp: -55.8894643263213
stoi: 0.8481258332025354
stoi_imp: 0.25528367853750356
```
License notice:
This work "ConvTasNet_Libri3Mix_sepclean_16k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri3Mix_sepclean_16k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris. | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri3Mix", "sep_clean"]} | JorisCos/ConvTasNet_Libri3Mix_sepclean_16k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri3Mix",
"dataset:sep_clean",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri3Mix #dataset-sep_clean #license-cc-by-sa-4.0 #region-us
|
## Asteroid model 'JorisCos/ConvTasNet_Libri3Mix_sepclean_16k'
Description:
This model was trained by Joris Cosentino using the librimix recipe in Asteroid.
It was trained on the 'sep_clean' task of the Libri3Mix dataset.
Training config:
Results :
On Libri3Mix min test set :
License notice:
This work "ConvTasNet_Libri3Mix_sepclean_16k"
is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,
used under CC BY 4.0. "ConvTasNet_Libri3Mix_sepclean_16k"
is licensed under Attribution-ShareAlike 3.0 Unported by Cosentino Joris. | [
"## Asteroid model 'JorisCos/ConvTasNet_Libri3Mix_sepclean_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid. \nIt was trained on the 'sep_clean' task of the Libri3Mix dataset.\n\nTraining config:\n\n\n\nResults :\n\nOn Libri3Mix min test set :\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri3Mix_sepclean_16k\" \nis a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0. \"ConvTasNet_Libri3Mix_sepclean_16k\" \nis licensed under Attribution-ShareAlike 3.0 Unported by Cosentino Joris."
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri3Mix #dataset-sep_clean #license-cc-by-sa-4.0 #region-us \n",
"## Asteroid model 'JorisCos/ConvTasNet_Libri3Mix_sepclean_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid. \nIt was trained on the 'sep_clean' task of the Libri3Mix dataset.\n\nTraining config:\n\n\n\nResults :\n\nOn Libri3Mix min test set :\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri3Mix_sepclean_16k\" \nis a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0. \"ConvTasNet_Libri3Mix_sepclean_16k\" \nis licensed under Attribution-ShareAlike 3.0 Unported by Cosentino Joris."
] | [
53,
181
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri3Mix #dataset-sep_clean #license-cc-by-sa-4.0 #region-us \n## Asteroid model 'JorisCos/ConvTasNet_Libri3Mix_sepclean_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid. \nIt was trained on the 'sep_clean' task of the Libri3Mix dataset.\n\nTraining config:\n\n\n\nResults :\n\nOn Libri3Mix min test set :\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri3Mix_sepclean_16k\" \nis a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0. \"ConvTasNet_Libri3Mix_sepclean_16k\" \nis licensed under Attribution-ShareAlike 3.0 Unported by Cosentino Joris."
] |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepclean_8k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_clean` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 8000
segment: 3
task: sep_clean
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results :
On Libri3Mix min test set :
```yaml
si_sdr: 8.581797049575108
si_sdr_imp: 11.977037288467368
sdr: 9.305885208641385
sdr_imp: 12.3943409734845
sir: 16.42030534048559
sir_imp: 19.508759460400984
sar: 10.641943911079238
sar_imp: -56.4345187842095
stoi: 0.8365148408724333
stoi_imp: 0.24401766199806396
```
License notice:
This work "ConvTasNet_Libri3Mix_sepclean_8k"
is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). "ConvTasNet_Libri3Mix_sepclean_8k"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Cosentino Joris. | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri3Mix", "sep_clean"]} | JorisCos/ConvTasNet_Libri3Mix_sepclean_8k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri3Mix",
"dataset:sep_clean",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri3Mix #dataset-sep_clean #license-cc-by-sa-4.0 #region-us
|
## Asteroid model 'JorisCos/ConvTasNet_Libri3Mix_sepclean_8k'
Description:
This model was trained by Joris Cosentino using the librimix recipe in Asteroid.
It was trained on the 'sep_clean' task of the Libri3Mix dataset.
Training config:
Results :
On Libri3Mix min test set :
License notice:
This work "ConvTasNet_Libri3Mix_sepclean_8k"
is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,
used under CC BY 4.0. "ConvTasNet_Libri3Mix_sepclean_8k"
is licensed under Attribution-ShareAlike 3.0 Unported by Cosentino Joris. | [
"## Asteroid model 'JorisCos/ConvTasNet_Libri3Mix_sepclean_8k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid. \nIt was trained on the 'sep_clean' task of the Libri3Mix dataset.\n\nTraining config:\n\n\nResults :\n\nOn Libri3Mix min test set :\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri3Mix_sepclean_8k\" \nis a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0. \"ConvTasNet_Libri3Mix_sepclean_8k\" \nis licensed under Attribution-ShareAlike 3.0 Unported by Cosentino Joris."
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri3Mix #dataset-sep_clean #license-cc-by-sa-4.0 #region-us \n",
"## Asteroid model 'JorisCos/ConvTasNet_Libri3Mix_sepclean_8k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid. \nIt was trained on the 'sep_clean' task of the Libri3Mix dataset.\n\nTraining config:\n\n\nResults :\n\nOn Libri3Mix min test set :\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri3Mix_sepclean_8k\" \nis a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0. \"ConvTasNet_Libri3Mix_sepclean_8k\" \nis licensed under Attribution-ShareAlike 3.0 Unported by Cosentino Joris."
] | [
53,
181
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri3Mix #dataset-sep_clean #license-cc-by-sa-4.0 #region-us \n## Asteroid model 'JorisCos/ConvTasNet_Libri3Mix_sepclean_8k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid. \nIt was trained on the 'sep_clean' task of the Libri3Mix dataset.\n\nTraining config:\n\n\nResults :\n\nOn Libri3Mix min test set :\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri3Mix_sepclean_8k\" \nis a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0. \"ConvTasNet_Libri3Mix_sepclean_8k\" \nis licensed under Attribution-ShareAlike 3.0 Unported by Cosentino Joris."
] |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 16000
segment: 3
task: sep_noisy
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 32
n_filters: 512
stride: 16
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri3Mix min test set :
```yml
si_sdr: 5.926151147554517
si_sdr_imp: 10.282912158535625
sdr: 6.700975236867358
sdr_imp: 10.882972447337504
sir: 15.364110064569388
sir_imp: 18.574476587171688
sar: 7.918866830474568
sar_imp: -0.9638973409971135
stoi: 0.7713777027310713
stoi_imp: 0.2078696167973911
```
License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/).
"ConvTasNet_Libri3Mix_sepnoisy_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri3Mix", "sep_noisy"]} | JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri3Mix",
"dataset:sep_noisy",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri3Mix #dataset-sep_noisy #license-cc-by-sa-4.0 #region-us
|
## Asteroid model 'JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k'
Description:
This model was trained by Joris Cosentino using the librimix recipe in Asteroid.
It was trained on the 'sep_noisy' task of the Libri3Mix dataset.
Training config:
Results:
On Libri3Mix min test set :
License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy_16k" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,
used under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures
dataset by URL, used under CC BY-NC 4.0.
"ConvTasNet_Libri3Mix_sepnoisy_16k" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino | [
"## Asteroid model 'JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'sep_noisy' task of the Libri3Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri3Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri3Mix_sepnoisy_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0. \n\"ConvTasNet_Libri3Mix_sepnoisy_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri3Mix #dataset-sep_noisy #license-cc-by-sa-4.0 #region-us \n",
"## Asteroid model 'JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'sep_noisy' task of the Libri3Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri3Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri3Mix_sepnoisy_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0. \n\"ConvTasNet_Libri3Mix_sepnoisy_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
53,
210
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri3Mix #dataset-sep_noisy #license-cc-by-sa-4.0 #region-us \n## Asteroid model 'JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'sep_noisy' task of the Libri3Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri3Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri3Mix_sepnoisy_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0. \n\"ConvTasNet_Libri3Mix_sepnoisy_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `sep_noisy` task of the Libri3Mix dataset.
Training config:
```yml
data:
n_src: 3
sample_rate: 8000
segment: 3
task: sep_noisy
train_dir: data/wav8k/min/train-360
valid_dir: data/wav8k/min/dev
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
masknet:
bn_chan: 128
hid_chan: 512
mask_act: relu
n_blocks: 8
n_repeats: 3
n_src: 3
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
training:
batch_size: 24
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On Libri3Mix min test set :
```yml
si_sdr: 5.978836560066222
si_sdr_imp: 10.388889689413096
sdr: 6.8651365291740225
sdr_imp: 10.928018056925016
sir: 14.997089638783114
sir_imp: 18.08248357801549
sar: 8.127504792061933
sar_imp: -0.7869320540959925
stoi: 0.7669414686111115
stoi_imp: 0.20416563213078837
```
License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy_8k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"ConvTasNet_Libri3Mix_sepnoisy_8k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "ConvTasNet", "audio-to-audio"], "datasets": ["Libri3Mix", "sep_noisy"]} | JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k | null | [
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"dataset:Libri3Mix",
"dataset:sep_noisy",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri3Mix #dataset-sep_noisy #license-cc-by-sa-4.0 #region-us
|
## Asteroid model 'JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k'
Description:
This model was trained by Joris Cosentino using the librimix recipe in Asteroid.
It was trained on the 'sep_noisy' task of the Libri3Mix dataset.
Training config:
Results:
On Libri3Mix min test set :
License notice:
This work "ConvTasNet_Libri3Mix_sepnoisy_8k" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,
used under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures
dataset by URL, used under CC BY-NC 4.0 (Research only).
"ConvTasNet_Libri3Mix_sepnoisy_8k" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino | [
"## Asteroid model 'JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'sep_noisy' task of the Libri3Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri3Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri3Mix_sepnoisy_8k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"ConvTasNet_Libri3Mix_sepnoisy_8k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri3Mix #dataset-sep_noisy #license-cc-by-sa-4.0 #region-us \n",
"## Asteroid model 'JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'sep_noisy' task of the Libri3Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri3Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri3Mix_sepnoisy_8k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"ConvTasNet_Libri3Mix_sepnoisy_8k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
53,
214
] | [
"TAGS\n#asteroid #pytorch #audio #ConvTasNet #audio-to-audio #dataset-Libri3Mix #dataset-sep_noisy #license-cc-by-sa-4.0 #region-us \n## Asteroid model 'JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'sep_noisy' task of the Libri3Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri3Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"ConvTasNet_Libri3Mix_sepnoisy_8k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"ConvTasNet_Libri3Mix_sepnoisy_8k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/DCCRNet_Libri1Mix_enhsignle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
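For a rough idea of how the enhancement model is used, here is a sketch (not from the original card): the repository id is taken from this model's Hub metadata, and the output shape is assumed to follow Asteroid's (batch, n_src, time) convention.

```python
import torch
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("JorisCos/DCCRNet_Libri1Mix_enhsingle_16k")

noisy = torch.randn(1, 16000)  # placeholder for one second of 16 kHz noisy speech
with torch.no_grad():
    enhanced = model(noisy)    # single enhanced estimate (shape assumed (batch, 1, time))
```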
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
stft_kernel_size: 400
stft_n_filters: 512
stft_stride: 100
masknet:
architecture: DCCRN-CL
n_src: 1
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 12
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 13.329767398333798
si_sdr_imp: 9.879986092474098
sdr: 13.87279932997016
sdr_imp: 10.370136530757103
sir: Infinity
sir_imp: NaN
sar: 13.87279932997016
sar_imp: 10.370136530757103
stoi: 0.9140907015623948
stoi_imp: 0.11817087802185405
```
License notice:
This work "DCCRNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DCCRNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "DCCRNet", "audio-to-audio", "speech-enhancement"], "datasets": ["Libri1Mix", "enh_single"]} | JorisCos/DCCRNet_Libri1Mix_enhsingle_16k | null | [
"asteroid",
"pytorch",
"audio",
"DCCRNet",
"audio-to-audio",
"speech-enhancement",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#asteroid #pytorch #audio #DCCRNet #audio-to-audio #speech-enhancement #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #has_space #region-us
|
## Asteroid model 'JorisCos/DCCRNet_Libri1Mix_enhsignle_16k'
Description:
This model was trained by Joris Cosentino using the librimix recipe in Asteroid.
It was trained on the 'enh_single' task of the Libri1Mix dataset.
Training config:
Results:
On Libri1Mix min test set :
License notice:
This work "DCCRNet_Libri1Mix_enhsignle_16k" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,
used under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures
dataset by URL, used under CC BY-NC 4.0 (Research only).
"DCCRNet_Libri1Mix_enhsignle_16k" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino | [
"## Asteroid model 'JorisCos/DCCRNet_Libri1Mix_enhsignle_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri1Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"DCCRNet_Libri1Mix_enhsignle_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"DCCRNet_Libri1Mix_enhsignle_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
"TAGS\n#asteroid #pytorch #audio #DCCRNet #audio-to-audio #speech-enhancement #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #has_space #region-us \n",
"## Asteroid model 'JorisCos/DCCRNet_Libri1Mix_enhsignle_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri1Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"DCCRNet_Libri1Mix_enhsignle_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"DCCRNet_Libri1Mix_enhsignle_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
61,
212
] | [
"TAGS\n#asteroid #pytorch #audio #DCCRNet #audio-to-audio #speech-enhancement #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #has_space #region-us \n## Asteroid model 'JorisCos/DCCRNet_Libri1Mix_enhsignle_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri1Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"DCCRNet_Libri1Mix_enhsignle_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"DCCRNet_Libri1Mix_enhsignle_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/DCUNet_Libri1Mix_enhsignle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
stft_n_filters: 1024
stft_kernel_size: 1024
stft_stride: 256
masknet:
architecture: Large-DCUNet-20
fix_length_mode: pad
n_src: 1
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 2
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 13.154035391645971
si_sdr_imp: 9.704254085786271
sdr: 13.568058873121435
sdr_imp: 10.065396073908367
sar: 13.568058873121435
sar_imp: 10.065396073908367
stoi: 0.9199373340235417
stoi_imp: 0.12401751048300132
```
License notice:
This work "DCUNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DCUNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "DCUNet", "audio-to-audio"], "datasets": ["Libri1Mix", "enh_single"]} | JorisCos/DCUNet_Libri1Mix_enhsingle_16k | null | [
"asteroid",
"pytorch",
"audio",
"DCUNet",
"audio-to-audio",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#asteroid #pytorch #audio #DCUNet #audio-to-audio #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #has_space #region-us
|
## Asteroid model 'JorisCos/DCUNet_Libri1Mix_enhsignle_16k'
Description:
This model was trained by Joris Cosentino using the librimix recipe in Asteroid.
It was trained on the 'enh_single' task of the Libri1Mix dataset.
Training config:
Results:
On Libri1Mix min test set :
License notice:
This work "DCUNet_Libri1Mix_enhsignle_16k" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,
used under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures
dataset by URL, used under CC BY-NC 4.0 (Research only).
"DCUNet_Libri1Mix_enhsignle_16k" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino | [
"## Asteroid model 'JorisCos/DCUNet_Libri1Mix_enhsignle_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri1Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"DCUNet_Libri1Mix_enhsignle_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"DCUNet_Libri1Mix_enhsignle_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
"TAGS\n#asteroid #pytorch #audio #DCUNet #audio-to-audio #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #has_space #region-us \n",
"## Asteroid model 'JorisCos/DCUNet_Libri1Mix_enhsignle_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri1Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"DCUNet_Libri1Mix_enhsignle_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"DCUNet_Libri1Mix_enhsignle_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
57,
212
] | [
"TAGS\n#asteroid #pytorch #audio #DCUNet #audio-to-audio #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #has_space #region-us \n## Asteroid model 'JorisCos/DCUNet_Libri1Mix_enhsignle_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri1Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"DCUNet_Libri1Mix_enhsignle_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"DCUNet_Libri1Mix_enhsignle_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/DPRNNTasNet_Libri1Mix_enhsignle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
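Since the model operates on 16 kHz audio, inputs generally need to be resampled first; a minimal sketch using torchaudio is given below (the file name is a placeholder and the repository id is taken from this model's Hub metadata):

```python
import torch
import torchaudio
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("JorisCos/DPRNNTasNet-ks2_Libri1Mix_enhsingle_16k")

wav, sr = torchaudio.load("noisy_speech.wav")             # placeholder file, shape (channels, time)
if sr != 16000:                                           # the model was trained on 16 kHz audio
    wav = torchaudio.functional.resample(wav, sr, 16000)
mix = wav.mean(dim=0, keepdim=True)                       # downmix to mono: (1, time)

with torch.no_grad():
    enhanced = model(mix)
```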
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 1
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 2
n_filters: 64
stride: 1
masknet:
bidirectional: true
bn_chan: 128
chunk_size: 250
dropout: 0
hid_size: 128
hop_size: 125
in_chan: 64
mask_act: sigmoid
n_repeats: 6
n_src: 1
out_chan: 64
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
training:
batch_size: 2
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 14.7228101708889
si_sdr_imp: 11.2730288650292
sdr: 15.35661405197161
sdr_imp: 11.853951252758595
sir: Infinity
sir_imp: NaN
sar: 15.35661405197161
sar_imp: 11.853951252758595
stoi: 0.9300461826351578
stoi_imp: 0.13412635909461715
```
License notice:
This work "DPRNNTasNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DPRNNTasNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "DPRNNTasNet", "audio-to-audio"], "datasets": ["Libri1Mix", "enh_single"]} | JorisCos/DPRNNTasNet-ks2_Libri1Mix_enhsingle_16k | null | [
"asteroid",
"pytorch",
"audio",
"DPRNNTasNet",
"audio-to-audio",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#asteroid #pytorch #audio #DPRNNTasNet #audio-to-audio #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #region-us
|
## Asteroid model 'JorisCos/DPRNNTasNet_Libri1Mix_enhsignle_16k'
Description:
This model was trained by Joris Cosentino using the librimix recipe in Asteroid.
It was trained on the 'enh_single' task of the Libri1Mix dataset.
Training config:
Results:
On Libri1Mix min test set :
License notice:
This work "DPRNNTasNet_Libri1Mix_enhsignle_16k" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,
used under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures
dataset by URL, used under CC BY-NC 4.0 (Research only).
"DPRNNTasNet_Libri1Mix_enhsignle_16k" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino | [
"## Asteroid model 'JorisCos/DPRNNTasNet_Libri1Mix_enhsignle_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri1Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"DPRNNTasNet_Libri1Mix_enhsignle_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"DPRNNTasNet_Libri1Mix_enhsignle_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
"TAGS\n#asteroid #pytorch #audio #DPRNNTasNet #audio-to-audio #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #region-us \n",
"## Asteroid model 'JorisCos/DPRNNTasNet_Libri1Mix_enhsignle_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri1Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"DPRNNTasNet_Libri1Mix_enhsignle_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"DPRNNTasNet_Libri1Mix_enhsignle_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
55,
218
] | [
"TAGS\n#asteroid #pytorch #audio #DPRNNTasNet #audio-to-audio #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #region-us \n## Asteroid model 'JorisCos/DPRNNTasNet_Libri1Mix_enhsignle_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri1Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"DPRNNTasNet_Libri1Mix_enhsignle_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"DPRNNTasNet_Libri1Mix_enhsignle_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] |
audio-to-audio | asteroid |
## Asteroid model `JorisCos/DPTNet_Libri1Mix_enhsignle_16k`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
Training config:
```yml
data:
n_src: 1
sample_rate: 16000
segment: 3
task: enh_single
train_dir: data/wav16k/min/train-360
valid_dir: data/wav16k/min/dev
filterbank:
kernel_size: 16
n_filters: 64
stride: 8
masknet:
bidirectional: true
chunk_size: 100
dropout: 0
ff_activation: relu
ff_hid: 256
hop_size: 50
in_chan: 64
mask_act: sigmoid
n_repeats: 2
n_src: 1
norm_type: gLN
out_chan: 64
optim:
lr: 0.001
optimizer: adam
weight_decay: 1.0e-05
scheduler:
d_model: 64
steps_per_epoch: 10000
training:
batch_size: 4
early_stop: true
epochs: 200
gradient_clipping: 5
half_lr: true
num_workers: 4
```
Results:
On Libri1Mix min test set :
```yml
si_sdr: 14.829670037349064
si_sdr_imp: 11.379888731489366
sdr: 15.395712644737149
sdr_imp: 11.893049845524112
sir: Infinity
sir_imp: NaN
sar: 15.395712644737149
sar_imp: 11.893049845524112
stoi: 0.9301948391058859
stoi_imp: 0.13427501556534832
```
License notice:
This work "DPTNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures
dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only).
"DPTNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "DPTNet", "audio-to-audio"], "datasets": ["Libri1Mix", "enh_single"]} | JorisCos/DPTNet_Libri1Mix_enhsingle_16k | null | [
"asteroid",
"pytorch",
"audio",
"DPTNet",
"audio-to-audio",
"dataset:Libri1Mix",
"dataset:enh_single",
"license:cc-by-sa-4.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#asteroid #pytorch #audio #DPTNet #audio-to-audio #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #has_space #region-us
|
## Asteroid model 'JorisCos/DPTNet_Libri1Mix_enhsignle_16k'
Description:
This model was trained by Joris Cosentino using the librimix recipe in Asteroid.
It was trained on the 'enh_single' task of the Libri1Mix dataset.
Training config:
Results:
On Libri1Mix min test set :
License notice:
This work "DPTNet_Libri1Mix_enhsignle_16k" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,
used under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures
dataset by URL, used under CC BY-NC 4.0 (Research only).
"DPTNet_Libri1Mix_enhsignle_16k" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino | [
"## Asteroid model 'JorisCos/DPTNet_Libri1Mix_enhsignle_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri1Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"DPTNet_Libri1Mix_enhsignle_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"DPTNet_Libri1Mix_enhsignle_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
"TAGS\n#asteroid #pytorch #audio #DPTNet #audio-to-audio #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #has_space #region-us \n",
"## Asteroid model 'JorisCos/DPTNet_Libri1Mix_enhsignle_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri1Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"DPTNet_Libri1Mix_enhsignle_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"DPTNet_Libri1Mix_enhsignle_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
57,
212
] | [
"TAGS\n#asteroid #pytorch #audio #DPTNet #audio-to-audio #dataset-Libri1Mix #dataset-enh_single #license-cc-by-sa-4.0 #has_space #region-us \n## Asteroid model 'JorisCos/DPTNet_Libri1Mix_enhsignle_16k'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn Libri1Mix min test set :\n\n\n\nLicense notice:\n\nThis work \"DPTNet_Libri1Mix_enhsignle_16k\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The WSJ0 Hipster Ambient Mixtures \ndataset by URL, used under CC BY-NC 4.0 (Research only). \n\"DPTNet_Libri1Mix_enhsignle_16k\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] |
null | asteroid |
## Asteroid model `JorisCos/VAD_Net`
Description:
This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid).
It was trained on the `enh_single` task of the Libri1Mix dataset.
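The card does not document the inference API; as a rough sketch only, assuming the checkpoint loads through Asteroid's generic hub loader, that the 8 kHz rate implied by the filterbank settings applies, and that the output is a per-sample voice-activity score that can be thresholded:

```python
import torch
from asteroid.models import BaseModel

vad = BaseModel.from_pretrained("JorisCos/VAD_Net")

wav = torch.randn(1, 3 * 8000)  # placeholder waveform; 8 kHz assumed from the filterbank settings
with torch.no_grad():
    scores = vad(wav)           # assumed: per-sample voice-activity scores
is_speech = scores > 0.5        # assumed decision threshold; tune on held-out data
```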
Training config:
```yml
data:
segment: 3
train_dir: /home/jcosentino/VAD_dataset/metadata/sets/train.json
valid_dir: /home/jcosentino/VAD_dataset/metadata/sets/dev.json
filterbank:
kernel_size: 16
n_filters: 512
stride: 8
main_args:
exp_dir: exp/full_not_causal_f1/
help: null
masknet:
bn_chan: 128
causal: false
hid_chan: 512
mask_act: relu
n_blocks: 3
n_repeats: 5
skip_chan: 128
optim:
lr: 0.001
optimizer: adam
weight_decay: 0.0
positional arguments: {}
training:
batch_size: 8
early_stop: true
epochs: 200
half_lr: true
num_workers: 4
```
Results:
On LibriVAD min test set :
```yml
accuracy: 0.8196149023502931,
precision: 0.8305009048356607,
recall: 0.8869202491310206,
f1_score: 0.8426184545700124
```
License notice:
This work "VAD_Net" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov,
used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The [DNS challenge](https://github.com/microsoft/DNS-Challenge) noises, used under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/). 
"VAD_Net" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "VADNet", "VAD", "Voice Activity Detection"], "datasets": ["LibriVAD"]} | JorisCos/VAD_Net | null | [
"asteroid",
"pytorch",
"audio",
"VADNet",
"VAD",
"Voice Activity Detection",
"dataset:LibriVAD",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#asteroid #pytorch #audio #VADNet #VAD #Voice Activity Detection #dataset-LibriVAD #license-cc-by-sa-4.0 #region-us
|
## Asteroid model 'JorisCos/VAD_Net'
Description:
This model was trained by Joris Cosentino using the librimix recipe in Asteroid.
It was trained on the 'enh_single' task of the Libri1Mix dataset.
Training config:
Results:
On LibriVAD min test set :
License notice:
This work "VAD_Net" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,
used under CC BY 4.0; of The DNS challenge noises, Attribution-ShareAlike 3.0 Unported.
"VAD_Net" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino | [
"## Asteroid model 'JorisCos/VAD_Net'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn LibriVAD min test set :\n\n\n\nLicense notice:\n\nThis work \"VAD_Net\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The DNS challenge noises, Attribution-ShareAlike 3.0 Unported.\n\"VAD_Net\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
"TAGS\n#asteroid #pytorch #audio #VADNet #VAD #Voice Activity Detection #dataset-LibriVAD #license-cc-by-sa-4.0 #region-us \n",
"## Asteroid model 'JorisCos/VAD_Net'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn LibriVAD min test set :\n\n\n\nLicense notice:\n\nThis work \"VAD_Net\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The DNS challenge noises, Attribution-ShareAlike 3.0 Unported.\n\"VAD_Net\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] | [
45,
162
] | [
"TAGS\n#asteroid #pytorch #audio #VADNet #VAD #Voice Activity Detection #dataset-LibriVAD #license-cc-by-sa-4.0 #region-us \n## Asteroid model 'JorisCos/VAD_Net'\n\nDescription:\n\nThis model was trained by Joris Cosentino using the librimix recipe in Asteroid.\nIt was trained on the 'enh_single' task of the Libri1Mix dataset.\n\nTraining config:\n\n\n \n\nResults:\n\nOn LibriVAD min test set :\n\n\n\nLicense notice:\n\nThis work \"VAD_Net\" is a derivative of LibriSpeech ASR corpus by Vassil Panayotov,\nused under CC BY 4.0; of The DNS challenge noises, Attribution-ShareAlike 3.0 Unported.\n\"VAD_Net\" is licensed under Attribution-ShareAlike 3.0 Unported by Joris Cosentino"
] |
text2text-generation | transformers | # BART_Finetuned_CNN_dailymail
The following repo contains a [bart-base](https://huggingface.co/facebook/bart-base) model that was finetuned using the dataset [cnn_dailymail](https://huggingface.co/datasets/cnn_dailymail) | {} | Josmar/BART_Finetuned_CNN_dailymail | null | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
| # BART_Finetuned_CNN_dailymail
The following repo contains a bart-base model that was finetuned using the dataset cnn_dailymail | [
"# BART_Finetuned_CNN_dailymail\nThe following repo contains a bart-base model that was finetuned using the dataset cnn_dailymail"
] | [
"TAGS\n#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n",
"# BART_Finetuned_CNN_dailymail\nThe following repo contains a bart-base model that was finetuned using the dataset cnn_dailymail"
] | [
30,
34
] | [
"TAGS\n#transformers #pytorch #bart #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n# BART_Finetuned_CNN_dailymail\nThe following repo contains a bart-base model that was finetuned using the dataset cnn_dailymail"
] |
translation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-fr
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the kde4 dataset.
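As an illustration only (the card itself gives no usage details), the checkpoint can presumably be used for English→French translation, as the "-fr" suffix and the KDE4 data suggest, along these lines (assuming the repository ships the tokenizer files):

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("Jour/m2m100_418M-fr")
tokenizer = M2M100Tokenizer.from_pretrained("Jour/m2m100_418M-fr")

tokenizer.src_lang = "en"
inputs = tokenizer("Open the file manager.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("fr"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```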
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.0+cpu
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "mit", "tags": ["translation", "generated_from_trainer"], "datasets": ["kde4"], "model-index": [{"name": "m2m100_418M-fr", "results": []}]} | Jour/m2m100_418M-fr | null | [
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #m2m_100 #text2text-generation #translation #generated_from_trainer #dataset-kde4 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# m2m100_418M-fr
This model is a fine-tuned version of facebook/m2m100_418M on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.0+cpu
- Datasets 1.16.1
- Tokenizers 0.10.3
| [
"# m2m100_418M-fr\n\nThis model is a fine-tuned version of facebook/m2m100_418M on the kde4 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.9.0+cpu\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #m2m_100 #text2text-generation #translation #generated_from_trainer #dataset-kde4 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# m2m100_418M-fr\n\nThis model is a fine-tuned version of facebook/m2m100_418M on the kde4 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.9.0+cpu\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] | [
55,
36,
7,
9,
9,
4,
93,
42
] | [
"TAGS\n#transformers #pytorch #tensorboard #m2m_100 #text2text-generation #translation #generated_from_trainer #dataset-kde4 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# m2m100_418M-fr\n\nThis model is a fine-tuned version of facebook/m2m100_418M on the kde4 dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.9.0+cpu\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
text-generation | transformers |
# Morty DialoGPT Model | {"tags": ["conversational"]} | Julianqll/DialoGPT-small-finalmorty | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Morty DialoGPT Model | [
"# Morty DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Morty DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Morty DialoGPT Model"
] |
text-generation | transformers |
# Rick Sanchez DialoGPT Model | {"tags": ["conversational"]} | Julianqll/DialoGPT-small-ricksanchez | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick Sanchez DialoGPT Model | [
"# Rick Sanchez DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick Sanchez DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Rick Sanchez DialoGPT Model"
] |
text-classification | transformers | ## Model description
This model was trained on the XED dataset and achieved
validation loss: 0.5995
validation acc: 84.28% (ROC-AUC)
Labels are based on Plutchik's model of emotions and may be combined:

### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.8.0
- Tokenizers 0.10.3
| {} | JuliusAlphonso/dear-jarvis-monolith-xed-en | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us
| ## Model description
This model was trained on the XED dataset and achieved
validation loss: 0.5995
validation acc: 84.28% (ROC-AUC)
Labels are based on Plutchik's model of emotions and may be combined:
!image
### Framework versions
- Transformers 4.6.1
- Pytorch 1.8.1+cu101
- Datasets 1.8.0
- Tokenizers 0.10.3
| [
"## Model description\nThis model was trained on the XED dataset and achieved \nvalidation loss: 0.5995 \nvalidation acc: 84.28% (ROC-AUC) \n\nLabels are based on Plutchik's model of emotions and may be combined:\n!image",
"### Framework versions\n- Transformers 4.6.1\n- Pytorch 1.8.1+cu101\n- Datasets 1.8.0\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"## Model description\nThis model was trained on the XED dataset and achieved \nvalidation loss: 0.5995 \nvalidation acc: 84.28% (ROC-AUC) \n\nLabels are based on Plutchik's model of emotions and may be combined:\n!image",
"### Framework versions\n- Transformers 4.6.1\n- Pytorch 1.8.1+cu101\n- Datasets 1.8.0\n- Tokenizers 0.10.3"
] | [
30,
57,
44
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n## Model description\nThis model was trained on the XED dataset and achieved \nvalidation loss: 0.5995 \nvalidation acc: 84.28% (ROC-AUC) \n\nLabels are based on Plutchik's model of emotions and may be combined:\n!image### Framework versions\n- Transformers 4.6.1\n- Pytorch 1.8.1+cu101\n- Datasets 1.8.0\n- Tokenizers 0.10.3"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dear-jarvis-v5
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3148
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
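For readers who want to reproduce this setup, the values above map onto `transformers.TrainingArguments` roughly as follows. This is a sketch rather than the authors' actual training script; the output directory is an assumption and the dataset/`Trainer` wiring is omitted.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; Adam betas/epsilon are the library defaults.
training_args = TrainingArguments(
    output_dir="dear-jarvis-v5",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
)
```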
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 470 | 0.3106 |
| 0.3452 | 2.0 | 940 | 0.3064 |
| 0.2692 | 3.0 | 1410 | 0.3148 |
### Framework versions
- Transformers 4.7.0
- Pytorch 1.9.0+cu102
- Datasets 1.8.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "datasets": [], "model_index": [{"name": "dear-jarvis-v5", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}}]}]} | JuliusAlphonso/dear-jarvis-v5 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #distilbert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| dear-jarvis-v5
==============
This model is a fine-tuned version of distilbert-base-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3148
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.7.0
* Pytorch 1.9.0+cu102
* Datasets 1.8.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.7.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.8.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.7.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.8.0\n* Tokenizers 0.10.3"
] | [
38,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.7.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.8.0\n* Tokenizers 0.10.3"
] |
text-classification | transformers | Labels are based on Plutchik's model of emotions and may be combined:
 | {} | JuliusAlphonso/distilbert-plutchik | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us
| Labels are based on Plutchik's model of emotions and may be combined:
!image | [] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
30
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7470
- Matthews Correlation: 0.5414
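An inference sketch for the resulting checkpoint is shown below (not part of the original card). By GLUE CoLA convention label 1 means "acceptable"; unless the config maps label names, the pipeline will report the default `LABEL_0`/`LABEL_1` ids.

```python
from transformers import pipeline

# Checkpoint id from this card's metadata; the example sentences are assumptions.
cola = pipeline("text-classification", model="Jungwoo/distilbert-base-uncased-finetuned-cola")

print(cola("The book was written by the author."))  # expected: acceptable
print(cola("The book was wrote by author the."))    # expected: unacceptable
```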
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5237 | 1.0 | 535 | 0.5327 | 0.4248 |
| 0.347 | 2.0 | 1070 | 0.5105 | 0.5239 |
| 0.2344 | 3.0 | 1605 | 0.6639 | 0.5224 |
| 0.1672 | 4.0 | 2140 | 0.7470 | 0.5414 |
| 0.1228 | 5.0 | 2675 | 0.8352 | 0.5377 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.541356878970505, "name": "Matthews Correlation"}]}]}]} | Jungwoo/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7470
* Matthews Correlation: 0.5414
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.12.2
* Pytorch 1.9.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.2\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.2\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] | [
56,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5### Training results### Framework versions\n\n\n* Transformers 4.12.2\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
null | asteroid | ## Asteroid model
## Description:
- Code: The code corresponding to this pretrained model can be found [here](https://github.com/asteroid-team/asteroid/tree/master/egs/wsj0-mix-var/Multi-Decoder-DPRNN).
- Notebook: Colab Notebook with examples can be found [here](https://colab.research.google.com/drive/11MGx3_sgOrQrB6k8edyAvg5mGIxqR5ED?usp=sharing)
- [Paper](http://www.isle.illinois.edu/speech_web_lg/pubs/2021/zhu2021multi.pdf): "Multi-Decoder DPRNN: High Accuracy Source Counting and Separation", Junzhe Zhu, Raymond Yeh, Mark Hasegawa-Johnson. ICASSP(2021).
- Summary: This model achieves SOTA on the problem of source separation with an unknown number of speakers. It uses multiple decoder heads (each tackling a distinct number of speakers), in addition to a classifier head that selects which decoder head to use.
- [Project Page](https://junzhejosephzhu.github.io/Multi-Decoder-DPRNN/)
- [Original research repo](https://github.com/JunzheJosephZhu/MultiDecoder-DPRNN)
This model was trained by Joseph Zhu using the wsj0-mix-var/Multi-Decoder-DPRNN recipe in Asteroid.
It was trained on the `sep_count` task of the Wsj0MixVar dataset.
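A loading sketch is given below. It assumes the published checkpoint can be restored through Asteroid's generic `BaseModel.from_pretrained`; if the Multi-Decoder DPRNN class is only available in the recipe / research repo linked above, the model class from that repo has to be used instead. The audio path is a placeholder.

```python
from asteroid.models import BaseModel

# Assumption: the checkpoint is loadable via Asteroid's generic loader.
model = BaseModel.from_pretrained("JunzheJosephZhu/MultiDecoderDPRNN")

# Separates an 8 kHz mixture and writes one estimate file per detected speaker
# (e.g. mixture_8k_est1.wav, mixture_8k_est2.wav, ...).
model.separate("mixture_8k.wav", force_overwrite=True)
```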
## Training config:
```yaml
filterbank:
n_filters: 64
kernel_size: 8
stride: 4
masknet:
n_srcs: [2, 3, 4, 5]
bn_chan: 128
hid_size: 128
chunk_size: 128
hop_size: 64
n_repeats: 8
mask_act: 'sigmoid'
bidirectional: true
dropout: 0
use_mulcat: false
training:
epochs: 200
batch_size: 2
num_workers: 2
half_lr: yes
lr_decay: yes
early_stop: yes
gradient_clipping: 5
optim:
optimizer: adam
lr: 0.001
weight_decay: 0.00000
data:
train_dir: "data/{}speakers/wav8k/min/tr"
valid_dir: "data/{}speakers/wav8k/min/cv"
task: sep_count
sample_rate: 8000
seglen: 4.0
minlen: 2.0
loss:
lambda: 0.05
```
## Results:
```yaml
'Accuracy': 0.9723333333333334, 'P-Si-SNR': 10.36027378628496
```
### License notice:
This work "MultiDecoderDPRNN" is a derivative of [CSR-I (WSJ0) Complete](https://catalog.ldc.upenn.edu/LDC93S6A)
by [LDC](https://www.ldc.upenn.edu/), used under [LDC User Agreement for
Non-Members](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf) (Research only).
"MultiDecoderDPRNN" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/)
by Joseph Zhu.
| {"license": "cc-by-sa-4.0", "tags": ["asteroid", "audio", "MultiDecoderDPRNN"], "datasets": ["Wsj0MixVar", "sep_clean"]} | JunzheJosephZhu/MultiDecoderDPRNN | null | [
"asteroid",
"pytorch",
"audio",
"MultiDecoderDPRNN",
"dataset:Wsj0MixVar",
"dataset:sep_clean",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#asteroid #pytorch #audio #MultiDecoderDPRNN #dataset-Wsj0MixVar #dataset-sep_clean #license-cc-by-sa-4.0 #region-us
| ## Asteroid model
## Description:
- Code: The code corresponding to this pretrained model can be found here.
- Notebook: Colab Notebook with examples can be found here
- Paper: "Multi-Decoder DPRNN: High Accuracy Source Counting and Separation", Junzhe Zhu, Raymond Yeh, Mark Hasegawa-Johnson. ICASSP(2021).
- Summary: This model achieves SOTA on the problem of source separation with an unknown number of speakers. It uses multiple decoder heads(each tackling a distinct number of speakers), in addition to a classifier head that selects which decoder head to use.
- Project Page
- Original research repo
This model was trained by Joseph Zhu using the wsj0-mix-var/Multi-Decoder-DPRNN recipe in Asteroid.
It was trained on the 'sep_count' task of the Wsj0MixVar dataset.
## Training config:
## Results:
### License notice:
This work "MultiDecoderDPRNN" is a derivative of CSR-I (WSJ0) Complete
by LDC, used under LDC User Agreement for
Non-Members (Research only).
"MultiDecoderDPRNN" is licensed under Attribution-ShareAlike 3.0 Unported
by Joseph Zhu.
| [
"## Asteroid model",
"## Description:\n- Code: The code corresponding to this pretrained model can be found here.\n\n- Notebook: Colab Notebook with examples can be found here\n\n- Paper: \"Multi-Decoder DPRNN: High Accuracy Source Counting and Separation\", Junzhe Zhu, Raymond Yeh, Mark Hasegawa-Johnson. ICASSP(2021). \n\n- Summary: This model achieves SOTA on the problem of source separation with an unknown number of speakers. It uses multiple decoder heads(each tackling a distinct number of speakers), in addition to a classifier head that selects which decoder head to use.\n\n- Project Page\n\n- Original research repo\n\nThis model was trained by Joseph Zhu using the wsj0-mix-var/Multi-Decoder-DPRNN recipe in Asteroid. \nIt was trained on the 'sep_count' task of the Wsj0MixVar dataset.",
"## Training config:",
"## Results:",
"### License notice:\nThis work \"MultiDecoderDPRNN\" is a derivative of CSR-I (WSJ0) Complete\nby LDC, used under LDC User Agreement for \nNon-Members (Research only). \n\"MultiDecoderDPRNN\" is licensed under Attribution-ShareAlike 3.0 Unported\nby Joseph Zhu."
] | [
"TAGS\n#asteroid #pytorch #audio #MultiDecoderDPRNN #dataset-Wsj0MixVar #dataset-sep_clean #license-cc-by-sa-4.0 #region-us \n",
"## Asteroid model",
"## Description:\n- Code: The code corresponding to this pretrained model can be found here.\n\n- Notebook: Colab Notebook with examples can be found here\n\n- Paper: \"Multi-Decoder DPRNN: High Accuracy Source Counting and Separation\", Junzhe Zhu, Raymond Yeh, Mark Hasegawa-Johnson. ICASSP(2021). \n\n- Summary: This model achieves SOTA on the problem of source separation with an unknown number of speakers. It uses multiple decoder heads(each tackling a distinct number of speakers), in addition to a classifier head that selects which decoder head to use.\n\n- Project Page\n\n- Original research repo\n\nThis model was trained by Joseph Zhu using the wsj0-mix-var/Multi-Decoder-DPRNN recipe in Asteroid. \nIt was trained on the 'sep_count' task of the Wsj0MixVar dataset.",
"## Training config:",
"## Results:",
"### License notice:\nThis work \"MultiDecoderDPRNN\" is a derivative of CSR-I (WSJ0) Complete\nby LDC, used under LDC User Agreement for \nNon-Members (Research only). \n\"MultiDecoderDPRNN\" is licensed under Attribution-ShareAlike 3.0 Unported\nby Joseph Zhu."
] | [
51,
4,
192,
7,
4,
78
] | [
"TAGS\n#asteroid #pytorch #audio #MultiDecoderDPRNN #dataset-Wsj0MixVar #dataset-sep_clean #license-cc-by-sa-4.0 #region-us \n## Asteroid model## Description:\n- Code: The code corresponding to this pretrained model can be found here.\n\n- Notebook: Colab Notebook with examples can be found here\n\n- Paper: \"Multi-Decoder DPRNN: High Accuracy Source Counting and Separation\", Junzhe Zhu, Raymond Yeh, Mark Hasegawa-Johnson. ICASSP(2021). \n\n- Summary: This model achieves SOTA on the problem of source separation with an unknown number of speakers. It uses multiple decoder heads(each tackling a distinct number of speakers), in addition to a classifier head that selects which decoder head to use.\n\n- Project Page\n\n- Original research repo\n\nThis model was trained by Joseph Zhu using the wsj0-mix-var/Multi-Decoder-DPRNN recipe in Asteroid. \nIt was trained on the 'sep_count' task of the Wsj0MixVar dataset.## Training config:## Results:### License notice:\nThis work \"MultiDecoderDPRNN\" is a derivative of CSR-I (WSJ0) Complete\nby LDC, used under LDC User Agreement for \nNon-Members (Research only). \n\"MultiDecoderDPRNN\" is licensed under Attribution-ShareAlike 3.0 Unported\nby Joseph Zhu."
] |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 29016523
- CO2 Emissions (in grams): 3.273303707756322
## Validation Metrics
- Loss: 0.6093757748603821
- Accuracy: 0.8333333333333334
- Macro F1: 0.7937936978656889
- Micro F1: 0.8333333333333334
- Weighted F1: 0.8239843785760546
- Macro Precision: 0.8988882462566673
- Micro Precision: 0.8333333333333334
- Weighted Precision: 0.8404982541824647
- Macro Recall: 0.7805142534864643
- Micro Recall: 0.8333333333333334
- Weighted Recall: 0.8333333333333334
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Jush/autonlp-bp-29016523
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Jush/autonlp-bp-29016523", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Jush/autonlp-bp-29016523", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "en", "tags": "autonlp", "datasets": ["Jush/autonlp-data-bp"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 3.273303707756322} | JushBJJ/autonlp-bp-29016523 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:Jush/autonlp-data-bp",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Jush/autonlp-data-bp #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 29016523
- CO2 Emissions (in grams): 3.273303707756322
## Validation Metrics
- Loss: 0.6093757748603821
- Accuracy: 0.8333333333333334
- Macro F1: 0.7937936978656889
- Micro F1: 0.8333333333333334
- Weighted F1: 0.8239843785760546
- Macro Precision: 0.8988882462566673
- Micro Precision: 0.8333333333333334
- Weighted Precision: 0.8404982541824647
- Macro Recall: 0.7805142534864643
- Micro Recall: 0.8333333333333334
- Weighted Recall: 0.8333333333333334
## Usage
You can use cURL to access this model:
Or Python API:
| [
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 29016523\n- CO2 Emissions (in grams): 3.273303707756322",
"## Validation Metrics\n\n- Loss: 0.6093757748603821\n- Accuracy: 0.8333333333333334\n- Macro F1: 0.7937936978656889\n- Micro F1: 0.8333333333333334\n- Weighted F1: 0.8239843785760546\n- Macro Precision: 0.8988882462566673\n- Micro Precision: 0.8333333333333334\n- Weighted Precision: 0.8404982541824647\n- Macro Recall: 0.7805142534864643\n- Micro Recall: 0.8333333333333334\n- Weighted Recall: 0.8333333333333334",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Jush/autonlp-data-bp #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 29016523\n- CO2 Emissions (in grams): 3.273303707756322",
"## Validation Metrics\n\n- Loss: 0.6093757748603821\n- Accuracy: 0.8333333333333334\n- Macro F1: 0.7937936978656889\n- Micro F1: 0.8333333333333334\n- Weighted F1: 0.8239843785760546\n- Macro Precision: 0.8988882462566673\n- Micro Precision: 0.8333333333333334\n- Weighted Precision: 0.8404982541824647\n- Macro Recall: 0.7805142534864643\n- Micro Recall: 0.8333333333333334\n- Weighted Recall: 0.8333333333333334",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] | [
56,
43,
168,
16
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Jush/autonlp-data-bp #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 29016523\n- CO2 Emissions (in grams): 3.273303707756322## Validation Metrics\n\n- Loss: 0.6093757748603821\n- Accuracy: 0.8333333333333334\n- Macro F1: 0.7937936978656889\n- Micro F1: 0.8333333333333334\n- Weighted F1: 0.8239843785760546\n- Macro Precision: 0.8988882462566673\n- Micro Precision: 0.8333333333333334\n- Weighted Precision: 0.8404982541824647\n- Macro Recall: 0.7805142534864643\n- Micro Recall: 0.8333333333333334\n- Weighted Recall: 0.8333333333333334## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
fill-mask | transformers | FidicBERT is a pre-trained language model for analyzing legal text. It is built by further training the RoBERTa language model on the legal domain, using an extensive legal and contract corpus, and is intended for fine-tuning on classifying and clustering contractual documents.
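A hedged usage sketch (not from the original description), assuming the checkpoint loads through the standard RoBERTa fill-mask pipeline; the example sentence is an assumption.

```python
from transformers import pipeline

# Repository id from this card's metadata.
fill = pipeline("fill-mask", model="Jzz/FidicBERT")

masked = f"The contractor shall {fill.tokenizer.mask_token} the works within the agreed period."
for prediction in fill(masked):
    print(prediction["token_str"], round(prediction["score"], 3))
```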
| {} | Jzz/FidicBERT | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| FidicBERT is a pre-trained language model to analyze legal text. It is built by further training the Roberta language model in the legal domain, using an extensive legal and contract corpus and thereby fine-tuning for classifying and clustering contractual documents.
| [] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
28
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
translation | transformers |
This model is finetuned from [mt5-base](https://huggingface.co/google/mt5-base).
The model vocabulary is trimmed to ~1/3 by selecting top 85000 tokens in the training data. The code to trim the vocabulary can be found [here](https://gist.github.com/K024/4a100a0f4f4b07208958e0f3244da6ad).
Usage:
```python
from transformers import (
T5Tokenizer,
MT5ForConditionalGeneration,
Text2TextGenerationPipeline,
)
path = "K024/mt5-zh-ja-en-trimmed"
pipe = Text2TextGenerationPipeline(
model=MT5ForConditionalGeneration.from_pretrained(path),
tokenizer=T5Tokenizer.from_pretrained(path),
)
sentence = "ja2zh: 吾輩は猫である。名前はまだ無い。"
res = pipe(sentence, max_length=100, num_beams=4)
res[0]['generated_text']
```
Training data:
```
wikimedia-en-ja
wikimedia-en-zh
wikimedia-ja-zh
wikititles-ja-en
wikititles-zh-en
wikimatrix-ja-zh
news-commentary-en-ja
news-commentary-en-zh
news-commentary-ja-zh
ted2020-en-ja
ted2020-en-zh
ted2020-ja-zh
```
License: [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
| {"language": ["zh", "ja", "en"], "license": "cc-by-nc-sa-4.0", "tags": ["translation"], "widget": [{"text": "ja2zh: \u543e\u8f29\u306f\u732b\u3067\u3042\u308b\u3002\u540d\u524d\u306f\u307e\u3060\u7121\u3044\u3002"}]} | K024/mt5-zh-ja-en-trimmed | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"translation",
"zh",
"ja",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh",
"ja",
"en"
] | TAGS
#transformers #pytorch #mt5 #text2text-generation #translation #zh #ja #en #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
This model is finetuned from mt5-base.
The model vocabulary is trimmed to ~1/3 by selecting top 85000 tokens in the training data. The code to trim the vocabulary can be found here.
Usage:
Training data:
License: [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: URL
[cc-by-nc-sa-image]: URL
| [] | [
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #translation #zh #ja #en #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] | [
64
] | [
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #translation #zh #ja #en #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
null | null | yes | {} | K3LLiN/Kellin | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| yes | [] | [
"TAGS\n#region-us \n"
] | [
5
] | [
"TAGS\n#region-us \n"
] |
text-generation | transformers |
# Rick DialoGPT Model | {"tags": ["conversational"]} | KAIHATSU/DialoGPT-small-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Rick DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | transformers |
# Swedish BERT Models
The National Library of Sweden / KBLab releases three pretrained language models based on BERT and ALBERT. The models are trained on approximately 15-20GB of text (200M sentences, 3000M tokens) from various sources (books, news, government publications, swedish wikipedia and internet forums) aiming to provide a representative BERT model for Swedish text. A more complete description will be published later on.
The following three models are currently available:
- **bert-base-swedish-cased** (*v1*) - A BERT trained with the same hyperparameters as first published by Google.
- **bert-base-swedish-cased-ner** (*experimental*) - a BERT fine-tuned for NER using SUC 3.0.
- **albert-base-swedish-cased-alpha** (*alpha*) - A first attempt at an ALBERT for Swedish.
All models are cased and trained with whole word masking.
## Files
| **name** | **files** |
|---------------------------------|-----------|
| bert-base-swedish-cased | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/config.json), [vocab](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/vocab.txt), [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/pytorch_model.bin) |
| bert-base-swedish-cased-ner | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/config.json), [vocab](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/vocab.txt) [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/pytorch_model.bin) |
| albert-base-swedish-cased-alpha | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/config.json), [sentencepiece model](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/spiece.model), [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/pytorch_model.bin) |
TensorFlow model weights will be released soon.
## Usage requirements / installation instructions
The examples below require Huggingface Transformers 2.4.1 and Pytorch 1.3.1 or greater. For Transformers<2.4.0 the tokenizer must be instantiated manually and the `do_lower_case` flag parameter set to `False` and `keep_accents` to `True` (for ALBERT).
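For the older-Transformers case just described, manual instantiation would look roughly like the sketch below; it is a hedged example and the exact constructor arguments may differ between releases.

```python
from transformers import BertTokenizer, AlbertTokenizer

# Sketch for Transformers < 2.4.0, where these flags are not picked up automatically.
bert_tok = BertTokenizer.from_pretrained('KBLab/bert-base-swedish-cased', do_lower_case=False)
albert_tok = AlbertTokenizer.from_pretrained(
    'KBLab/albert-base-swedish-cased-alpha', do_lower_case=False, keep_accents=True
)
```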
To create an environment where the examples can be run, run the following in a terminal on your OS of choice.
```
# git clone https://github.com/Kungbib/swedish-bert-models
# cd swedish-bert-models
# python3 -m venv venv
# source venv/bin/activate
# pip install --upgrade pip
# pip install -r requirements.txt
```
### BERT Base Swedish
A standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:
```python
from transformers import AutoModel,AutoTokenizer
tok = AutoTokenizer.from_pretrained('KBLab/bert-base-swedish-cased')
model = AutoModel.from_pretrained('KBLab/bert-base-swedish-cased')
```
### BERT base fine-tuned for Swedish NER
This model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformers<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:
```python
from transformers import pipeline
nlp = pipeline('ner', model='KB/bert-base-swedish-cased-ner', tokenizer='KB/bert-base-swedish-cased-ner')
nlp('Idag släpper KB tre språkmodeller.')
```
Running the Python code above should produce something like the result below. Entity types used are `TME` for time, `PRS` for personal names, `LOC` for locations, `EVN` for events and `ORG` for organisations. These labels are subject to change.
```python
[ { 'word': 'Idag', 'score': 0.9998126029968262, 'entity': 'TME' },
{ 'word': 'KB', 'score': 0.9814832210540771, 'entity': 'ORG' } ]
```
The BERT tokenizer often splits words into multiple tokens, with the subparts starting with `##`, for example the string `Engelbert kör Volvo till Herrängens fotbollsklubb` gets tokenized as `Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb`. To glue parts back together one can use something like this:
```python
text = 'Engelbert tar Volvon till Tele2 Arena för att titta på Djurgården IF ' +\
'som spelar fotboll i VM klockan två på kvällen.'
l = []
for token in nlp(text):
if token['word'].startswith('##'):
l[-1]['word'] += token['word'][2:]
else:
l += [ token ]
print(l)
```
Which should result in the following (though less cleanly formatted):
```python
[ { 'word': 'Engelbert', 'score': 0.99..., 'entity': 'PRS'},
{ 'word': 'Volvon', 'score': 0.99..., 'entity': 'OBJ'},
{ 'word': 'Tele2', 'score': 0.99..., 'entity': 'LOC'},
{ 'word': 'Arena', 'score': 0.99..., 'entity': 'LOC'},
{ 'word': 'Djurgården', 'score': 0.99..., 'entity': 'ORG'},
{ 'word': 'IF', 'score': 0.99..., 'entity': 'ORG'},
{ 'word': 'VM', 'score': 0.99..., 'entity': 'EVN'},
{ 'word': 'klockan', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'två', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'på', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'kvällen', 'score': 0.54..., 'entity': 'TME'} ]
```
### ALBERT base
The easiest way to load the ALBERT model is, again, using Huggingface Transformers:
```python
from transformers import AutoModel,AutoTokenizer
tok = AutoTokenizer.from_pretrained('KBLab/albert-base-swedish-cased-alpha'),
model = AutoModel.from_pretrained('KBLab/albert-base-swedish-cased-alpha')
```
## Acknowledgements ❤️
- Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.
- Model pretraining was done partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
- Models are hosted on S3 by Huggingface 🤗
| {"language": "sv"} | KBLab/albert-base-swedish-cased-alpha | null | [
"transformers",
"pytorch",
"albert",
"sv",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sv"
] | TAGS
#transformers #pytorch #albert #sv #endpoints_compatible #region-us
| Swedish BERT Models
===================
The National Library of Sweden / KBLab releases three pretrained language models based on BERT and ALBERT. The models are trained on approximately 15-20GB of text (200M sentences, 3000M tokens) from various sources (books, news, government publications, swedish wikipedia and internet forums) aiming to provide a representative BERT model for Swedish text. A more complete description will be published later on.
The following three models are currently available:
* bert-base-swedish-cased (*v1*) - A BERT trained with the same hyperparameters as first published by Google.
* bert-base-swedish-cased-ner (*experimental*) - a BERT fine-tuned for NER using SUC 3.0.
* albert-base-swedish-cased-alpha (*alpha*) - A first attempt at an ALBERT for Swedish.
All models are cased and trained with whole word masking.
Files
-----
TensorFlow model weights will be released soon.
Usage requirements / installation instructions
----------------------------------------------
The examples below require Huggingface Transformers 2.4.1 and Pytorch 1.3.1 or greater. For Transformers<2.4.0 the tokenizer must be instantiated manually and the 'do\_lower\_case' flag parameter set to 'False' and 'keep\_accents' to 'True' (for ALBERT).
To create an environment where the examples can be run, run the following in an terminal on your OS of choice.
### BERT Base Swedish
A standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:
### BERT base fine-tuned for Swedish NER
This model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformer<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:
Running the Python code above should produce in something like the result below. Entity types used are 'TME' for time, 'PRS' for personal names, 'LOC' for locations, 'EVN' for events and 'ORG' for organisations. These labels are subject to change.
The BERT tokenizer often splits words into multiple tokens, with the subparts starting with '##', for example the string 'Engelbert kör Volvo till Herrängens fotbollsklubb' gets tokenized as 'Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb'. To glue parts back together one can use something like this:
Which should result in the following (though less cleanly formatted):
### ALBERT base
The easiest way to do this is, again, using Huggingface Transformers:
Acknowledgements ️
------------------
* Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.
* Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
* Models are hosted on S3 by Huggingface
| [
"### BERT Base Swedish\n\n\nA standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:",
"### BERT base fine-tuned for Swedish NER\n\n\nThis model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformer<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:\n\n\nRunning the Python code above should produce in something like the result below. Entity types used are 'TME' for time, 'PRS' for personal names, 'LOC' for locations, 'EVN' for events and 'ORG' for organisations. These labels are subject to change.\n\n\nThe BERT tokenizer often splits words into multiple tokens, with the subparts starting with '##', for example the string 'Engelbert kör Volvo till Herrängens fotbollsklubb' gets tokenized as 'Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb'. To glue parts back together one can use something like this:\n\n\nWhich should result in the following (though less cleanly formatted):",
"### ALBERT base\n\n\nThe easiest way to do this is, again, using Huggingface Transformers:\n\n\nAcknowledgements ️\n------------------\n\n\n* Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.\n* Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\n* Models are hosted on S3 by Huggingface"
] | [
"TAGS\n#transformers #pytorch #albert #sv #endpoints_compatible #region-us \n",
"### BERT Base Swedish\n\n\nA standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:",
"### BERT base fine-tuned for Swedish NER\n\n\nThis model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformer<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:\n\n\nRunning the Python code above should produce in something like the result below. Entity types used are 'TME' for time, 'PRS' for personal names, 'LOC' for locations, 'EVN' for events and 'ORG' for organisations. These labels are subject to change.\n\n\nThe BERT tokenizer often splits words into multiple tokens, with the subparts starting with '##', for example the string 'Engelbert kör Volvo till Herrängens fotbollsklubb' gets tokenized as 'Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb'. To glue parts back together one can use something like this:\n\n\nWhich should result in the following (though less cleanly formatted):",
"### ALBERT base\n\n\nThe easiest way to do this is, again, using Huggingface Transformers:\n\n\nAcknowledgements ️\n------------------\n\n\n* Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.\n* Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\n* Models are hosted on S3 by Huggingface"
] | [
21,
40,
233,
123
] | [
"TAGS\n#transformers #pytorch #albert #sv #endpoints_compatible #region-us \n### BERT Base Swedish\n\n\nA standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:### BERT base fine-tuned for Swedish NER\n\n\nThis model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformer<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:\n\n\nRunning the Python code above should produce in something like the result below. Entity types used are 'TME' for time, 'PRS' for personal names, 'LOC' for locations, 'EVN' for events and 'ORG' for organisations. These labels are subject to change.\n\n\nThe BERT tokenizer often splits words into multiple tokens, with the subparts starting with '##', for example the string 'Engelbert kör Volvo till Herrängens fotbollsklubb' gets tokenized as 'Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb'. To glue parts back together one can use something like this:\n\n\nWhich should result in the following (though less cleanly formatted):### ALBERT base\n\n\nThe easiest way to do this is, again, using Huggingface Transformers:\n\n\nAcknowledgements ️\n------------------\n\n\n* Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.\n* Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\n* Models are hosted on S3 by Huggingface"
] |
token-classification | transformers |
# Swedish BERT Models
The National Library of Sweden / KBLab releases three pretrained language models based on BERT and ALBERT. The models are trained on approximately 15-20GB of text (200M sentences, 3000M tokens) from various sources (books, news, government publications, swedish wikipedia and internet forums) aiming to provide a representative BERT model for Swedish text. A more complete description will be published later on.
The following three models are currently available:
- **bert-base-swedish-cased** (*v1*) - A BERT trained with the same hyperparameters as first published by Google.
- **bert-base-swedish-cased-ner** (*experimental*) - a BERT fine-tuned for NER using SUC 3.0.
- **albert-base-swedish-cased-alpha** (*alpha*) - A first attempt at an ALBERT for Swedish.
All models are cased and trained with whole word masking.
## Files
| **name** | **files** |
|---------------------------------|-----------|
| bert-base-swedish-cased | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/config.json), [vocab](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/vocab.txt), [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/pytorch_model.bin) |
| bert-base-swedish-cased-ner | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/config.json), [vocab](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/vocab.txt) [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/pytorch_model.bin) |
| albert-base-swedish-cased-alpha | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/config.json), [sentencepiece model](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/spiece.model), [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/pytorch_model.bin) |
TensorFlow model weights will be released soon.
## Usage requirements / installation instructions
The examples below require Huggingface Transformers 2.4.1 and Pytorch 1.3.1 or greater. For Transformers<2.4.0 the tokenizer must be instantiated manually and the `do_lower_case` flag parameter set to `False` and `keep_accents` to `True` (for ALBERT).
To create an environment where the examples can be run, run the following in a terminal on your OS of choice.
```
# git clone https://github.com/Kungbib/swedish-bert-models
# cd swedish-bert-models
# python3 -m venv venv
# source venv/bin/activate
# pip install --upgrade pip
# pip install -r requirements.txt
```
### BERT Base Swedish
A standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:
```python
from transformers import AutoModel,AutoTokenizer
tok = AutoTokenizer.from_pretrained('KBLab/bert-base-swedish-cased')
model = AutoModel.from_pretrained('KBLab/bert-base-swedish-cased')
```
### BERT base fine-tuned for Swedish NER
This model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformers<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:
```python
from transformers import pipeline
nlp = pipeline('ner', model='KBLab/bert-base-swedish-cased-ner', tokenizer='KBLab/bert-base-swedish-cased-ner')
nlp('Idag släpper KB tre språkmodeller.')
```
Running the Python code above should produce something like the result below. Entity types used are `TME` for time, `PRS` for personal names, `LOC` for locations, `EVN` for events and `ORG` for organisations. These labels are subject to change.
```python
[ { 'word': 'Idag', 'score': 0.9998126029968262, 'entity': 'TME' },
{ 'word': 'KB', 'score': 0.9814832210540771, 'entity': 'ORG' } ]
```
The BERT tokenizer often splits words into multiple tokens, with the subparts starting with `##`, for example the string `Engelbert kör Volvo till Herrängens fotbollsklubb` gets tokenized as `Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb`. To glue parts back together one can use something like this:
```python
text = 'Engelbert tar Volvon till Tele2 Arena för att titta på Djurgården IF ' +\
'som spelar fotboll i VM klockan två på kvällen.'
l = []
for token in nlp(text):
if token['word'].startswith('##'):
l[-1]['word'] += token['word'][2:]
else:
l += [ token ]
print(l)
```
Which should result in the following (though less cleanly formatted):
```python
[ { 'word': 'Engelbert', 'score': 0.99..., 'entity': 'PRS'},
{ 'word': 'Volvon', 'score': 0.99..., 'entity': 'OBJ'},
{ 'word': 'Tele2', 'score': 0.99..., 'entity': 'LOC'},
{ 'word': 'Arena', 'score': 0.99..., 'entity': 'LOC'},
{ 'word': 'Djurgården', 'score': 0.99..., 'entity': 'ORG'},
{ 'word': 'IF', 'score': 0.99..., 'entity': 'ORG'},
{ 'word': 'VM', 'score': 0.99..., 'entity': 'EVN'},
{ 'word': 'klockan', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'två', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'på', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'kvällen', 'score': 0.54..., 'entity': 'TME'} ]
```
### ALBERT base
The easiest way to load the ALBERT model is, again, using Huggingface Transformers:
```python
from transformers import AutoModel,AutoTokenizer
tok = AutoTokenizer.from_pretrained('KBLab/albert-base-swedish-cased-alpha'),
model = AutoModel.from_pretrained('KBLab/albert-base-swedish-cased-alpha')
```
## Acknowledgements ❤️
- Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.
- Model pretraining was done partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
- Models are hosted on S3 by Huggingface 🤗
| {"language": "sv"} | KBLab/bert-base-swedish-cased-ner | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"token-classification",
"sv",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sv"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #token-classification #sv #autotrain_compatible #endpoints_compatible #has_space #region-us
| Swedish BERT Models
===================
The National Library of Sweden / KBLab releases three pretrained language models based on BERT and ALBERT. The models are trained on approximately 15-20GB of text (200M sentences, 3000M tokens) from various sources (books, news, government publications, swedish wikipedia and internet forums) aiming to provide a representative BERT model for Swedish text. A more complete description will be published later on.
The following three models are currently available:
* bert-base-swedish-cased (*v1*) - A BERT trained with the same hyperparameters as first published by Google.
* bert-base-swedish-cased-ner (*experimental*) - a BERT fine-tuned for NER using SUC 3.0.
* albert-base-swedish-cased-alpha (*alpha*) - A first attempt at an ALBERT for Swedish.
All models are cased and trained with whole word masking.
Files
-----
TensorFlow model weights will be released soon.
Usage requirements / installation instructions
----------------------------------------------
The examples below require Huggingface Transformers 2.4.1 and Pytorch 1.3.1 or greater. For Transformers<2.4.0 the tokenizer must be instantiated manually and the 'do\_lower\_case' flag parameter set to 'False' and 'keep\_accents' to 'True' (for ALBERT).
To create an environment where the examples can be run, run the following in an terminal on your OS of choice.
### BERT Base Swedish
A standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:
### BERT base fine-tuned for Swedish NER
This model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformer<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:
Running the Python code above should produce in something like the result below. Entity types used are 'TME' for time, 'PRS' for personal names, 'LOC' for locations, 'EVN' for events and 'ORG' for organisations. These labels are subject to change.
The BERT tokenizer often splits words into multiple tokens, with the subparts starting with '##', for example the string 'Engelbert kör Volvo till Herrängens fotbollsklubb' gets tokenized as 'Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb'. To glue parts back together one can use something like this:
Which should result in the following (though less cleanly formatted):
### ALBERT base
The easiest way to do this is, again, using Huggingface Transformers:
Acknowledgements ️
------------------
* Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.
* Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
* Models are hosted on S3 by Huggingface
| [
"### BERT Base Swedish\n\n\nA standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:",
"### BERT base fine-tuned for Swedish NER\n\n\nThis model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformer<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:\n\n\nRunning the Python code above should produce in something like the result below. Entity types used are 'TME' for time, 'PRS' for personal names, 'LOC' for locations, 'EVN' for events and 'ORG' for organisations. These labels are subject to change.\n\n\nThe BERT tokenizer often splits words into multiple tokens, with the subparts starting with '##', for example the string 'Engelbert kör Volvo till Herrängens fotbollsklubb' gets tokenized as 'Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb'. To glue parts back together one can use something like this:\n\n\nWhich should result in the following (though less cleanly formatted):",
"### ALBERT base\n\n\nThe easiest way to do this is, again, using Huggingface Transformers:\n\n\nAcknowledgements ️\n------------------\n\n\n* Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.\n* Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\n* Models are hosted on S3 by Huggingface"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #token-classification #sv #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### BERT Base Swedish\n\n\nA standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:",
"### BERT base fine-tuned for Swedish NER\n\n\nThis model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformer<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:\n\n\nRunning the Python code above should produce in something like the result below. Entity types used are 'TME' for time, 'PRS' for personal names, 'LOC' for locations, 'EVN' for events and 'ORG' for organisations. These labels are subject to change.\n\n\nThe BERT tokenizer often splits words into multiple tokens, with the subparts starting with '##', for example the string 'Engelbert kör Volvo till Herrängens fotbollsklubb' gets tokenized as 'Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb'. To glue parts back together one can use something like this:\n\n\nWhich should result in the following (though less cleanly formatted):",
"### ALBERT base\n\n\nThe easiest way to do this is, again, using Huggingface Transformers:\n\n\nAcknowledgements ️\n------------------\n\n\n* Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.\n* Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\n* Models are hosted on S3 by Huggingface"
] | [
43,
40,
233,
123
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #token-classification #sv #autotrain_compatible #endpoints_compatible #has_space #region-us \n### BERT Base Swedish\n\n\nA standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:### BERT base fine-tuned for Swedish NER\n\n\nThis model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformer<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:\n\n\nRunning the Python code above should produce in something like the result below. Entity types used are 'TME' for time, 'PRS' for personal names, 'LOC' for locations, 'EVN' for events and 'ORG' for organisations. These labels are subject to change.\n\n\nThe BERT tokenizer often splits words into multiple tokens, with the subparts starting with '##', for example the string 'Engelbert kör Volvo till Herrängens fotbollsklubb' gets tokenized as 'Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb'. To glue parts back together one can use something like this:\n\n\nWhich should result in the following (though less cleanly formatted):### ALBERT base\n\n\nThe easiest way to do this is, again, using Huggingface Transformers:\n\n\nAcknowledgements ️\n------------------\n\n\n* Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.\n* Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\n* Models are hosted on S3 by Huggingface"
] |
fill-mask | transformers |
# Swedish BERT Models
The National Library of Sweden / KBLab releases three pretrained language models based on BERT and ALBERT. The models are trained on approximately 15-20GB of text (200M sentences, 3000M tokens) from various sources (books, news, government publications, Swedish Wikipedia and internet forums), aiming to provide a representative BERT model for Swedish text. A more complete description will be published later on.
The following three models are currently available:
- **bert-base-swedish-cased** (*v1*) - A BERT trained with the same hyperparameters as first published by Google.
- **bert-base-swedish-cased-ner** (*experimental*) - a BERT fine-tuned for NER using SUC 3.0.
- **albert-base-swedish-cased-alpha** (*alpha*) - A first attempt at an ALBERT for Swedish.
All models are cased and trained with whole word masking.
## Files
| **name** | **files** |
|---------------------------------|-----------|
| bert-base-swedish-cased | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/config.json), [vocab](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/vocab.txt), [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased/pytorch_model.bin) |
| bert-base-swedish-cased-ner | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/config.json), [vocab](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/vocab.txt) [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/bert-base-swedish-cased-ner/pytorch_model.bin) |
| albert-base-swedish-cased-alpha | [config](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/config.json), [sentencepiece model](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/spiece.model), [pytorch_model.bin](https://s3.amazonaws.com/models.huggingface.co/bert/KB/albert-base-swedish-cased-alpha/pytorch_model.bin) |
TensorFlow model weights will be released soon.
## Usage requirements / installation instructions
The examples below require Huggingface Transformers 2.4.1 and Pytorch 1.3.1 or greater. For Transformers<2.4.0 the tokenizer must be instantiated manually and the `do_lower_case` flag parameter set to `False` and `keep_accents` to `True` (for ALBERT).
To create an environment where the examples can be run, run the following in a terminal on your OS of choice.
```
# git clone https://github.com/Kungbib/swedish-bert-models
# cd swedish-bert-models
# python3 -m venv venv
# source venv/bin/activate
# pip install --upgrade pip
# pip install -r requirements.txt
```
### BERT Base Swedish
A standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:
```python
from transformers import AutoModel,AutoTokenizer
tok = AutoTokenizer.from_pretrained('KBLab/bert-base-swedish-cased')
model = AutoModel.from_pretrained('KBLab/bert-base-swedish-cased')
```
### BERT base fine-tuned for Swedish NER
This model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformers<2.4.1 the tokenizer must be loaded separately to disable lower-casing of input strings:
```python
from transformers import pipeline
nlp = pipeline('ner', model='KB/bert-base-swedish-cased-ner', tokenizer='KB/bert-base-swedish-cased-ner')
nlp('Idag släpper KB tre språkmodeller.')
```
Running the Python code above should produce something like the result below. Entity types used are `TME` for time, `PRS` for personal names, `LOC` for locations, `EVN` for events and `ORG` for organisations. These labels are subject to change.
```python
[ { 'word': 'Idag', 'score': 0.9998126029968262, 'entity': 'TME' },
{ 'word': 'KB', 'score': 0.9814832210540771, 'entity': 'ORG' } ]
```
The BERT tokenizer often splits words into multiple tokens, with the subparts starting with `##`, for example the string `Engelbert kör Volvo till Herrängens fotbollsklubb` gets tokenized as `Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb`. To glue parts back together one can use something like this:
```python
text = 'Engelbert tar Volvon till Tele2 Arena för att titta på Djurgården IF ' +\
'som spelar fotboll i VM klockan två på kvällen.'
l = []
for token in nlp(text):
if token['word'].startswith('##'):
l[-1]['word'] += token['word'][2:]
else:
l += [ token ]
print(l)
```
Which should result in the following (though less cleanly formatted):
```python
[ { 'word': 'Engelbert', 'score': 0.99..., 'entity': 'PRS'},
{ 'word': 'Volvon', 'score': 0.99..., 'entity': 'OBJ'},
{ 'word': 'Tele2', 'score': 0.99..., 'entity': 'LOC'},
{ 'word': 'Arena', 'score': 0.99..., 'entity': 'LOC'},
{ 'word': 'Djurgården', 'score': 0.99..., 'entity': 'ORG'},
{ 'word': 'IF', 'score': 0.99..., 'entity': 'ORG'},
{ 'word': 'VM', 'score': 0.99..., 'entity': 'EVN'},
{ 'word': 'klockan', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'två', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'på', 'score': 0.99..., 'entity': 'TME'},
{ 'word': 'kvällen', 'score': 0.54..., 'entity': 'TME'} ]
```
### ALBERT base
The easiest way to do this is, again, using Huggingface Transformers:
```python
from transformers import AutoModel,AutoTokenizer
tok = AutoTokenizer.from_pretrained('KBLab/albert-base-swedish-cased-alpha')
model = AutoModel.from_pretrained('KBLab/albert-base-swedish-cased-alpha')
```
## Acknowledgements ❤️
- Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.
- Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
- Models are hosted on S3 by Huggingface 🤗
## Citation
https://arxiv.org/abs/2007.01658
```
@misc{malmsten2020playing,
title={Playing with Words at the National Library of Sweden -- Making a Swedish BERT},
author={Martin Malmsten and Love Börjeson and Chris Haffenden},
year={2020},
eprint={2007.01658},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| {"language": "sv", "arxiv": "https://arxiv.org/abs/2007.01658"} | KBLab/bert-base-swedish-cased | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"sv",
"arxiv:2007.01658",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2007.01658"
] | [
"sv"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #sv #arxiv-2007.01658 #autotrain_compatible #endpoints_compatible #region-us
| Swedish BERT Models
===================
The National Library of Sweden / KBLab releases three pretrained language models based on BERT and ALBERT. The models are trained on approximately 15-20GB of text (200M sentences, 3000M tokens) from various sources (books, news, government publications, Swedish Wikipedia and internet forums), aiming to provide a representative BERT model for Swedish text. A more complete description will be published later on.
The following three models are currently available:
* bert-base-swedish-cased (*v1*) - A BERT trained with the same hyperparameters as first published by Google.
* bert-base-swedish-cased-ner (*experimental*) - a BERT fine-tuned for NER using SUC 3.0.
* albert-base-swedish-cased-alpha (*alpha*) - A first attempt at an ALBERT for Swedish.
All models are cased and trained with whole word masking.
Files
-----
TensorFlow model weights will be released soon.
Usage requirements / installation instructions
----------------------------------------------
The examples below require Huggingface Transformers 2.4.1 and Pytorch 1.3.1 or greater. For Transformers<2.4.0 the tokenizer must be instantiated manually and the 'do\_lower\_case' flag parameter set to 'False' and 'keep\_accents' to 'True' (for ALBERT).
To create an environment where the examples can be run, run the following in a terminal on your OS of choice.
### BERT Base Swedish
A standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:
### BERT base fine-tuned for Swedish NER
This model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformers<2.4.1 the tokenizer must be loaded separately to disable lower-casing of input strings:
Running the Python code above should produce something like the result below. Entity types used are 'TME' for time, 'PRS' for personal names, 'LOC' for locations, 'EVN' for events and 'ORG' for organisations. These labels are subject to change.
The BERT tokenizer often splits words into multiple tokens, with the subparts starting with '##', for example the string 'Engelbert kör Volvo till Herrängens fotbollsklubb' gets tokenized as 'Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb'. To glue parts back together one can use something like this:
Which should result in the following (though less cleanly formatted):
### ALBERT base
The easiest way to do this is, again, using Huggingface Transformers:
Acknowledgements
------------------
* Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.
* Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
* Models are hosted on S3 by Huggingface
URL
| [
"### BERT Base Swedish\n\n\nA standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:",
"### BERT base fine-tuned for Swedish NER\n\n\nThis model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformer<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:\n\n\nRunning the Python code above should produce in something like the result below. Entity types used are 'TME' for time, 'PRS' for personal names, 'LOC' for locations, 'EVN' for events and 'ORG' for organisations. These labels are subject to change.\n\n\nThe BERT tokenizer often splits words into multiple tokens, with the subparts starting with '##', for example the string 'Engelbert kör Volvo till Herrängens fotbollsklubb' gets tokenized as 'Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb'. To glue parts back together one can use something like this:\n\n\nWhich should result in the following (though less cleanly formated):",
"### ALBERT base\n\n\nThe easisest way to do this is, again, using Huggingface Transformers:\n\n\nAcknowledgements ️\n------------------\n\n\n* Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.\n* Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\n* Models are hosted on S3 by Huggingface\n\n\nURL"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #sv #arxiv-2007.01658 #autotrain_compatible #endpoints_compatible #region-us \n",
"### BERT Base Swedish\n\n\nA standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:",
"### BERT base fine-tuned for Swedish NER\n\n\nThis model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformer<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:\n\n\nRunning the Python code above should produce in something like the result below. Entity types used are 'TME' for time, 'PRS' for personal names, 'LOC' for locations, 'EVN' for events and 'ORG' for organisations. These labels are subject to change.\n\n\nThe BERT tokenizer often splits words into multiple tokens, with the subparts starting with '##', for example the string 'Engelbert kör Volvo till Herrängens fotbollsklubb' gets tokenized as 'Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb'. To glue parts back together one can use something like this:\n\n\nWhich should result in the following (though less cleanly formated):",
"### ALBERT base\n\n\nThe easisest way to do this is, again, using Huggingface Transformers:\n\n\nAcknowledgements ️\n------------------\n\n\n* Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.\n* Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\n* Models are hosted on S3 by Huggingface\n\n\nURL"
] | [
49,
40,
233,
127
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #sv #arxiv-2007.01658 #autotrain_compatible #endpoints_compatible #region-us \n### BERT Base Swedish\n\n\nA standard BERT base for Swedish trained on a variety of sources. Vocabulary size is ~50k. Using Huggingface Transformers the model can be loaded in Python as follows:### BERT base fine-tuned for Swedish NER\n\n\nThis model is fine-tuned on the SUC 3.0 dataset. Using the Huggingface pipeline the model can be easily instantiated. For Transformer<2.4.1 it seems the tokenizer must be loaded separately to disable lower-casing of input strings:\n\n\nRunning the Python code above should produce in something like the result below. Entity types used are 'TME' for time, 'PRS' for personal names, 'LOC' for locations, 'EVN' for events and 'ORG' for organisations. These labels are subject to change.\n\n\nThe BERT tokenizer often splits words into multiple tokens, with the subparts starting with '##', for example the string 'Engelbert kör Volvo till Herrängens fotbollsklubb' gets tokenized as 'Engel ##bert kör Volvo till Herr ##ängens fotbolls ##klubb'. To glue parts back together one can use something like this:\n\n\nWhich should result in the following (though less cleanly formated):### ALBERT base\n\n\nThe easisest way to do this is, again, using Huggingface Transformers:\n\n\nAcknowledgements ️\n------------------\n\n\n* Resources from Stockholms University, Umeå University and Swedish Language Bank at Gothenburg University were used when fine-tuning BERT for NER.\n* Model pretraining was made partly in-house at the KBLab and partly (for material without active copyright) with the support of Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\n* Models are hosted on S3 by Huggingface\n\n\nURL"
] |
automatic-speech-recognition | transformers |
Test | {"tags": ["automatic-speech-recognition", "generated_from_trainer", "asr_seq2seq"]} | KBLab/asr-voxrex-bart-base | null | [
"transformers",
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"asr_seq2seq",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #speech-encoder-decoder #automatic-speech-recognition #generated_from_trainer #asr_seq2seq #endpoints_compatible #region-us
|
Test | [] | [
"TAGS\n#transformers #pytorch #speech-encoder-decoder #automatic-speech-recognition #generated_from_trainer #asr_seq2seq #endpoints_compatible #region-us \n"
] | [
47
] | [
"TAGS\n#transformers #pytorch #speech-encoder-decoder #automatic-speech-recognition #generated_from_trainer #asr_seq2seq #endpoints_compatible #region-us \n"
] |
text2text-generation | transformers |
## KB-BART
A [BART](https://arxiv.org/abs/1910.13461) model trained on a Swedish corpus consisting of 15 billion tokens (about 80GB of text). The model was trained with [Fairseq](https://github.com/pytorch/fairseq), and converted to be compatible with Huggingface.
Training code can be found [here](https://github.com/kb-labb/kb_bart).
## Usage
```python
from transformers import BartForConditionalGeneration, AutoTokenizer
model = BartForConditionalGeneration.from_pretrained("KBLab/bart-base-swedish-cased")
tok = AutoTokenizer.from_pretrained("KBLab/bart-base-swedish-cased")
model.eval()
input_ids = tok.encode(
"Jag har ätit en utsökt <mask> på restaurang vid <mask> .", return_tensors="pt"
)
# Simple greedy search
output_ids = model.generate(
input_ids,
min_length=15,
max_length=25,
num_beams=1,
do_sample=False,
)
tok.decode(output_ids[0])
# '</s><s> Jag har ätit en utsökt middag på restaurang vid havet på restaurang vid havet på restaurang vid havet.</s>'
# Sampling
output_ids = model.generate(
input_ids,
min_length=15,
max_length=20,
num_beams=1,
do_sample=True,
)
tok.decode(output_ids[0])
#'</s><s> Jag har ätit en utsökt god mat som de tagit in på restaurang vid avröjda</s>'
# Beam search
output_ids = model.generate(
input_ids,
min_length=15,
max_length=25,
no_repeat_ngram_size=3,
num_beams=8,
early_stopping=True,
do_sample=True,
num_return_sequences=6
)
tok.decode(output_ids[0])
# '</s><s> Jag har ätit en utsökt middag på restaurang vid havet. Jag har varit ute och gått en sväng.</s><pad><pad>'
# Diverse beam generation
output_ids = model.generate(
input_ids,
min_length=50,
max_length=100,
no_repeat_ngram_size=3,
num_beams=8,
early_stopping=True,
do_sample=False,
num_return_sequences=6,
num_beam_groups=8,
diversity_penalty=2.0,
)
tok.decode(output_ids[0])
# '</s><s> Jag har ätit en utsökt middag på restaurang vid havet på restaurang. Jag har varit på restaurang i två dagar... Jag..,..!!!.. Så.. Nu.. Hej.. Vi.. Här.</s>'
```
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium ([www.hpc-rivr.si](https://www.hpc-rivr.si/)) and EuroHPC JU ([eurohpc-ju.europa.eu/](https://eurohpc-ju.europa.eu/)) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science ([www.izum.si](https://www.izum.si/)). | {"language": "sv", "widget": [{"text": "Jag har \u00e4tit en <mask>"}]} | KBLab/bart-base-swedish-cased | null | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"sv",
"arxiv:1910.13461",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1910.13461"
] | [
"sv"
] | TAGS
#transformers #pytorch #safetensors #bart #text2text-generation #sv #arxiv-1910.13461 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
## KB-BART
A BART model trained on a Swedish corpus consisting of 15 billion tokens (about 80GB of text). The model was trained with Fairseq, and converted to be compatible with Huggingface.
Training code can be found here.
## Usage
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL). | [
"## KB-BART\n\nA BART model trained on a Swedish corpus consisting of 15 billion tokens (about 80GB of text). The model was trained with Fairseq, and converted to be compatible with Huggingface. \n\nTraining code can be found here.",
"## Usage",
"## Acknowledgements\n\nWe gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL)."
] | [
"TAGS\n#transformers #pytorch #safetensors #bart #text2text-generation #sv #arxiv-1910.13461 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## KB-BART\n\nA BART model trained on a Swedish corpus consisting of 15 billion tokens (about 80GB of text). The model was trained with Fairseq, and converted to be compatible with Huggingface. \n\nTraining code can be found here.",
"## Usage",
"## Acknowledgements\n\nWe gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL)."
] | [
50,
52,
3,
51
] | [
"TAGS\n#transformers #pytorch #safetensors #bart #text2text-generation #sv #arxiv-1910.13461 #autotrain_compatible #endpoints_compatible #has_space #region-us \n## KB-BART\n\nA BART model trained on a Swedish corpus consisting of 15 billion tokens (about 80GB of text). The model was trained with Fairseq, and converted to be compatible with Huggingface. \n\nTraining code can be found here.## Usage## Acknowledgements\n\nWe gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL)."
] |
fill-mask | transformers |
# 🤗 BERT Swedish
This BERT model was trained using the 🤗 transformers library.
The size of the model is a regular BERT-base with 110M parameters.
The model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.
To avoid excessive padding, documents shorter than 512 tokens were concatenated into one large sequence of 512 tokens, and larger documents were split into multiple 512-token sequences, following https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py
Training was done for a bit more than 8 epochs with a batch size of 2048, resulting in a little less than 125k training steps.
The model has three sister models trained on the same dataset:
- [Megatron-BERT-base-125k](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-125k)
- [Megatron-BERT-base-600k](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-600k)
- [Megatron-BERT-large-110k](https://huggingface.co/KBLab/megatron-bert-large-swedish-cased-110k)
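A minimal usage sketch with Huggingface Transformers might look as follows; the example sentence is illustrative only:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Minimal sketch: predict the most likely token for a masked position.
# The example sentence is an illustration, not taken from the training data.
tok = AutoTokenizer.from_pretrained("KBLab/bert-base-swedish-cased-new")
model = AutoModelForMaskedLM.from_pretrained("KBLab/bert-base-swedish-cased-new")

inputs = tok(f"Huvudstaden i Sverige är {tok.mask_token}.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_positions = (inputs["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
print(tok.decode(logits[0, mask_positions].argmax(-1)))
```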
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium (https://www.hpc-rivr.si) and EuroHPC JU (https://eurohpc-ju.europa.eu) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (https://www.izum.si). | {"language": ["sv"]} | KBLab/bert-base-swedish-cased-new | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"sv",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sv"
] | TAGS
#transformers #pytorch #safetensors #bert #fill-mask #sv #autotrain_compatible #endpoints_compatible #region-us
|
# BERT Swedish
This BERT model was trained using the transformers library.
The size of the model is a regular BERT-base with 110M parameters.
The model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.
To avoid excessive padding, documents shorter than 512 tokens were concatenated into one large sequence of 512 tokens, and larger documents were split into multiple 512-token sequences, following URL
Training was done for a bit more than 8 epochs with a batch size of 2048, resulting in a little less than 125k training steps.
The model has three sister models trained on the same dataset:
- Megatron-BERT-base-125k
- Megatron-BERT-base-600k
- Megatron-BERT-large-110k
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL). | [
"# BERT Swedish\n\nThis BERT model was trained using the transformers library.\nThe size of the model is a regular BERT-base with 110M parameters.\nThe model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.\nTo avoid excessive padding documents shorter than 512 tokens were concatenated into one large sequence of 512 tokens, and larger documents were split into multiple 512 token sequences, following URL\n\nTraining was done for a bit more than 8 epochs with a batch size of 2048, resulting in a little less than 125k training steps.\n\nThe model has three sister models trained on the same dataset:\n- Megatron-BERT-base-125k\n- Megatron-BERT-base-600k\n- Megatron-BERT-large-110k",
"## Acknowledgements\n\nWe gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL)."
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #fill-mask #sv #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT Swedish\n\nThis BERT model was trained using the transformers library.\nThe size of the model is a regular BERT-base with 110M parameters.\nThe model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.\nTo avoid excessive padding documents shorter than 512 tokens were concatenated into one large sequence of 512 tokens, and larger documents were split into multiple 512 token sequences, following URL\n\nTraining was done for a bit more than 8 epochs with a batch size of 2048, resulting in a little less than 125k training steps.\n\nThe model has three sister models trained on the same dataset:\n- Megatron-BERT-base-125k\n- Megatron-BERT-base-600k\n- Megatron-BERT-large-110k",
"## Acknowledgements\n\nWe gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL)."
] | [
34,
167,
52
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #fill-mask #sv #autotrain_compatible #endpoints_compatible #region-us \n# BERT Swedish\n\nThis BERT model was trained using the transformers library.\nThe size of the model is a regular BERT-base with 110M parameters.\nThe model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.\nTo avoid excessive padding documents shorter than 512 tokens were concatenated into one large sequence of 512 tokens, and larger documents were split into multiple 512 token sequences, following URL\n\nTraining was done for a bit more than 8 epochs with a batch size of 2048, resulting in a little less than 125k training steps.\n\nThe model has three sister models trained on the same dataset:\n- Megatron-BERT-base-125k\n- Megatron-BERT-base-600k\n- Megatron-BERT-large-110k## Acknowledgements\n\nWe gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL)."
] |
token-classification | transformers |
# KB-BERT for NER
## Cased data
This model is based on [KB-BERT](https://huggingface.co/KB/bert-base-swedish-cased) and was fine-tuned on the [SUCX 3.0 - NER](https://huggingface.co/datasets/KBLab/sucx3_ner) corpus, using the _simple_ tags and cased data.
For this model we used a variation of the data that did **not** use BIO-encoding to differentiate between the beginnings (B), and insides (I) of named entity tags.
The model was trained on the training data only, with the best model chosen by its performance on the validation data.
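As a minimal usage sketch, the model can be loaded with the Huggingface `pipeline` API roughly as follows; the example sentence is the one from the widget on this card:
```python
from transformers import pipeline
# Minimal sketch: instantiate a token-classification (NER) pipeline with this model.
nlp = pipeline("ner", model="KBLab/bert-base-swedish-cased-reallysimple-ner")
print(nlp("Emil bor i Lönneberga"))
```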
You find more information about the model and the performance on our blog: https://kb-labb.github.io/posts/2022-02-07-sucx3_ner | {"language": "sv", "tags": ["token-classification", "sequence-tagger-model", "bert"], "datasets": ["KBLab/sucx3_ner"], "widget": [{"text": "Emil bor i L\u00f6nneberga"}]} | KBLab/bert-base-swedish-cased-reallysimple-ner | null | [
"transformers",
"pytorch",
"megatron-bert",
"token-classification",
"sequence-tagger-model",
"bert",
"sv",
"dataset:KBLab/sucx3_ner",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sv"
] | TAGS
#transformers #pytorch #megatron-bert #token-classification #sequence-tagger-model #bert #sv #dataset-KBLab/sucx3_ner #autotrain_compatible #endpoints_compatible #region-us
|
# KB-BERT for NER
## Cased data
This model is based on KB-BERT and was fine-tuned on the SUCX 3.0 - NER corpus, using the _simple_ tags and cased data.
For this model we used a variation of the data that did not use BIO-encoding to differentiate between the beginnings (B), and insides (I) of named entity tags.
The model was trained on the training data only, with the best model chosen by its performance on the validation data.
You find more information about the model and the performance on our blog: URL | [
"# KB-BERT for NER",
"## Cased data\n\nThis model is based on KB-BERT and was fine-tuned on the SUCX 3.0 - NER corpus, using the _simple_ tags and cased data.\nFor this model we used a variation of the data that did not use BIO-encoding to differentiate between the beginnings (B), and insides (I) of named entity tags.\n\nThe model was trained on the training data only, with the best model chosen by its performance on the validation data.\nYou find more information about the model and the performance on our blog: URL"
] | [
"TAGS\n#transformers #pytorch #megatron-bert #token-classification #sequence-tagger-model #bert #sv #dataset-KBLab/sucx3_ner #autotrain_compatible #endpoints_compatible #region-us \n",
"# KB-BERT for NER",
"## Cased data\n\nThis model is based on KB-BERT and was fine-tuned on the SUCX 3.0 - NER corpus, using the _simple_ tags and cased data.\nFor this model we used a variation of the data that did not use BIO-encoding to differentiate between the beginnings (B), and insides (I) of named entity tags.\n\nThe model was trained on the training data only, with the best model chosen by its performance on the validation data.\nYou find more information about the model and the performance on our blog: URL"
] | [
56,
7,
117
] | [
"TAGS\n#transformers #pytorch #megatron-bert #token-classification #sequence-tagger-model #bert #sv #dataset-KBLab/sucx3_ner #autotrain_compatible #endpoints_compatible #region-us \n# KB-BERT for NER## Cased data\n\nThis model is based on KB-BERT and was fine-tuned on the SUCX 3.0 - NER corpus, using the _simple_ tags and cased data.\nFor this model we used a variation of the data that did not use BIO-encoding to differentiate between the beginnings (B), and insides (I) of named entity tags.\n\nThe model was trained on the training data only, with the best model chosen by its performance on the validation data.\nYou find more information about the model and the performance on our blog: URL"
] |
token-classification | transformers |
# KB-BERT for NER
## Mixed cased and uncased data
This model is based on [KB-BERT](https://huggingface.co/KB/bert-base-swedish-cased) and was fine-tuned on the [SUCX 3.0 - NER](https://huggingface.co/datasets/KBLab/sucx3_ner) corpus, using the _simple_ tags and partially lowercased data.
For this model we used a variation of the data that did **not** use BIO-encoding to differentiate between the beginnings (B), and insides (I) of named entity tags.
The model was trained on the training data only, with the best model chosen by its performance on the validation data.
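A minimal sketch of loading the model directly and mapping each token's highest-scoring logit to the label names stored in the model config might look as follows (the example sentence is the one from the widget on this card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Minimal sketch: run the model and read label names from the config.
name = "KBLab/bert-base-swedish-lowermix-reallysimple-ner"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name)

inputs = tok("Emil bor i Lönneberga", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

labels = [model.config.id2label[i] for i in logits.argmax(-1)[0].tolist()]
print(list(zip(tok.convert_ids_to_tokens(inputs["input_ids"][0]), labels)))
```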
You find more information about the model and the performance on our blog: https://kb-labb.github.io/posts/2022-02-07-sucx3_ner | {"language": "sv", "tags": ["token-classification", "sequence-tagger-model", "bert"], "datasets": ["KBLab/sucx3_ner"], "model": ["KB/bert-base-swedish-cased"], "widget": [{"text": "Emil bor i L\u00f6nneberga"}]} | KBLab/bert-base-swedish-lowermix-reallysimple-ner | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"sequence-tagger-model",
"sv",
"dataset:KBLab/sucx3_ner",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sv"
] | TAGS
#transformers #pytorch #safetensors #bert #token-classification #sequence-tagger-model #sv #dataset-KBLab/sucx3_ner #autotrain_compatible #endpoints_compatible #region-us
|
# KB-BERT for NER
## Mixed cased and uncased data
This model is based on KB-BERT and was fine-tuned on the SUCX 3.0 - NER corpus, using the _simple_ tags and partially lowercased data.
For this model we used a variation of the data that did not use BIO-encoding to differentiate between the beginnings (B), and insides (I) of named entity tags.
The model was trained on the training data only, with the best model chosen by its performance on the validation data.
You find more information about the model and the performance on our blog: URL | [
"# KB-BERT for NER",
"## Mixed cased and uncased data\n\nThis model is based on KB-BERT and was fine-tuned on the SUCX 3.0 - NER corpus, using the _simple_ tags and partially lowercased data.\nFor this model we used a variation of the data that did not use BIO-encoding to differentiate between the beginnings (B), and insides (I) of named entity tags.\n\nThe model was trained on the training data only, with the best model chosen by its performance on the validation data.\nYou find more information about the model and the performance on our blog: URL"
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #token-classification #sequence-tagger-model #sv #dataset-KBLab/sucx3_ner #autotrain_compatible #endpoints_compatible #region-us \n",
"# KB-BERT for NER",
"## Mixed cased and uncased data\n\nThis model is based on KB-BERT and was fine-tuned on the SUCX 3.0 - NER corpus, using the _simple_ tags and partially lowercased data.\nFor this model we used a variation of the data that did not use BIO-encoding to differentiate between the beginnings (B), and insides (I) of named entity tags.\n\nThe model was trained on the training data only, with the best model chosen by its performance on the validation data.\nYou find more information about the model and the performance on our blog: URL"
] | [
55,
7,
122
] | [
"TAGS\n#transformers #pytorch #safetensors #bert #token-classification #sequence-tagger-model #sv #dataset-KBLab/sucx3_ner #autotrain_compatible #endpoints_compatible #region-us \n# KB-BERT for NER## Mixed cased and uncased data\n\nThis model is based on KB-BERT and was fine-tuned on the SUCX 3.0 - NER corpus, using the _simple_ tags and partially lowercased data.\nFor this model we used a variation of the data that did not use BIO-encoding to differentiate between the beginnings (B), and insides (I) of named entity tags.\n\nThe model was trained on the training data only, with the best model chosen by its performance on the validation data.\nYou find more information about the model and the performance on our blog: URL"
] |
fill-mask | transformers |
# Megatron-BERT-base Swedish 600k
This BERT model was trained using the Megatron-LM library.
The size of the model is a regular BERT-base with 110M parameters.
The model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.
Training was done for 600k training steps. Its [sister model](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-125k) used the same setup, but was instead trained for only 125k steps.
The model has three sister models trained on the same dataset:
- [🤗 BERT Swedish](https://huggingface.co/KBLab/bert-base-swedish-cased-new)
- [Megatron-BERT-base-125k](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-125k)
- [Megatron-BERT-large-110k](https://huggingface.co/KBLab/megatron-bert-large-swedish-cased-110k)
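A minimal usage sketch with the Huggingface fill-mask pipeline might look as follows; the example sentence is illustrative only:
```python
from transformers import pipeline
# Minimal sketch: the checkpoint can be used with the standard fill-mask pipeline.
fill = pipeline("fill-mask", model="KBLab/megatron-bert-base-swedish-cased-600k")
print(fill(f"Stockholm är Sveriges {fill.tokenizer.mask_token}."))
```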
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium (https://www.hpc-rivr.si) and EuroHPC JU (https://eurohpc-ju.europa.eu) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (https://www.izum.si). | {"language": ["sv"]} | KBLab/megatron-bert-base-swedish-cased-600k | null | [
"transformers",
"pytorch",
"safetensors",
"megatron-bert",
"fill-mask",
"sv",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sv"
] | TAGS
#transformers #pytorch #safetensors #megatron-bert #fill-mask #sv #autotrain_compatible #endpoints_compatible #region-us
|
# Megatron-BERT-base Swedish 600k
This BERT model was trained using the Megatron-LM library.
The size of the model is a regular BERT-base with 110M parameters.
The model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.
Training was done for 600k training steps. Its sister model used the same setup, but was instead trained for only 125k steps.
The model has three sister models trained on the same dataset:
- BERT Swedish
- Megatron-BERT-base-125k
- Megatron-BERT-large-110k
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL). | [
"# Megatron-BERT-base Swedish 600k\n\nThis BERT model was trained using the Megatron-LM library.\nThe size of the model is a regular BERT-base with 110M parameters.\nThe model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.\n\nTraining was done for 600k training steps. Its sister model used the same setup, but was instead trained for only 125k steps.\n\n\nThe model has three sister models trained on the same dataset:\n- BERT Swedish\n- Megatron-BERT-base-125k\n- Megatron-BERT-large-110k",
"## Acknowledgements\n\nWe gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL)."
] | [
"TAGS\n#transformers #pytorch #safetensors #megatron-bert #fill-mask #sv #autotrain_compatible #endpoints_compatible #region-us \n",
"# Megatron-BERT-base Swedish 600k\n\nThis BERT model was trained using the Megatron-LM library.\nThe size of the model is a regular BERT-base with 110M parameters.\nThe model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.\n\nTraining was done for 600k training steps. Its sister model used the same setup, but was instead trained for only 125k steps.\n\n\nThe model has three sister models trained on the same dataset:\n- BERT Swedish\n- Megatron-BERT-base-125k\n- Megatron-BERT-large-110k",
"## Acknowledgements\n\nWe gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL)."
] | [
37,
130,
52
] | [
"TAGS\n#transformers #pytorch #safetensors #megatron-bert #fill-mask #sv #autotrain_compatible #endpoints_compatible #region-us \n# Megatron-BERT-base Swedish 600k\n\nThis BERT model was trained using the Megatron-LM library.\nThe size of the model is a regular BERT-base with 110M parameters.\nThe model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.\n\nTraining was done for 600k training steps. Its sister model used the same setup, but was instead trained for only 125k steps.\n\n\nThe model has three sister models trained on the same dataset:\n- BERT Swedish\n- Megatron-BERT-base-125k\n- Megatron-BERT-large-110k## Acknowledgements\n\nWe gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL)."
] |
fill-mask | transformers |
# Megatron-BERT-base Swedish 125k
This BERT model was trained using the Megatron-LM library.
The size of the model is a regular BERT-base with 110M parameters.
The model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.
Training was done for 125k training steps. Its [sister model](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-600k) used the same setup, but was instead trained for 600k steps.
The model has three sister models trained on the same dataset:
- [🤗 BERT Swedish](https://huggingface.co/KBLab/bert-base-swedish-cased-new)
- [Megatron-BERT-base-600k](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-600k)
- [Megatron-BERT-large-110k](https://huggingface.co/KBLab/megatron-bert-large-swedish-cased-110k)
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium (https://www.hpc-rivr.si) and EuroHPC JU (https://eurohpc-ju.europa.eu) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (https://www.izum.si). | {"language": ["sv"]} | KBLab/megatron-bert-base-swedish-cased-125k | null | [
"transformers",
"pytorch",
"safetensors",
"megatron-bert",
"fill-mask",
"sv",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sv"
] | TAGS
#transformers #pytorch #safetensors #megatron-bert #fill-mask #sv #autotrain_compatible #endpoints_compatible #region-us
|
# Megatron-BERT-base Swedish 125k
This BERT model was trained using the Megatron-LM library.
The size of the model is a regular BERT-base with 110M parameters.
The model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.
Training was done for 125k training steps. Its sister model used the same setup, but was instead trained for 600k steps.
The model has three sister models trained on the same dataset:
- BERT Swedish
- Megatron-BERT-base-600k
- Megatron-BERT-large-110k
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL). | [
"# Megatron-BERT-base Swedish 125k\n\nThis BERT model was trained using the Megatron-LM library.\nThe size of the model is a regular BERT-base with 110M parameters.\nThe model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.\n\nTraining was done for 125k training steps. Its sister model used the same setup, but was instead trained for 600k steps.\n\n\nThe model has three sister models trained on the same dataset:\n- BERT Swedish\n- Megatron-BERT-base-600k\n- Megatron-BERT-large-110k",
"## Acknowledgements\n\nWe gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL)."
] | [
"TAGS\n#transformers #pytorch #safetensors #megatron-bert #fill-mask #sv #autotrain_compatible #endpoints_compatible #region-us \n",
"# Megatron-BERT-base Swedish 125k\n\nThis BERT model was trained using the Megatron-LM library.\nThe size of the model is a regular BERT-base with 110M parameters.\nThe model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.\n\nTraining was done for 125k training steps. Its sister model used the same setup, but was instead trained for 600k steps.\n\n\nThe model has three sister models trained on the same dataset:\n- BERT Swedish\n- Megatron-BERT-base-600k\n- Megatron-BERT-large-110k",
"## Acknowledgements\n\nWe gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL)."
] | [
37,
129,
52
] | [
"TAGS\n#transformers #pytorch #safetensors #megatron-bert #fill-mask #sv #autotrain_compatible #endpoints_compatible #region-us \n# Megatron-BERT-base Swedish 125k\n\nThis BERT model was trained using the Megatron-LM library.\nThe size of the model is a regular BERT-base with 110M parameters.\nThe model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.\n\nTraining was done for 125k training steps. Its sister model used the same setup, but was instead trained for 600k steps.\n\n\nThe model has three sister models trained on the same dataset:\n- BERT Swedish\n- Megatron-BERT-base-600k\n- Megatron-BERT-large-110k## Acknowledgements\n\nWe gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL)."
] |
fill-mask | transformers | # Roberta base TEST | {} | KBLab/roberta-base-swedish-cased | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| # Roberta base TEST | [
"# Roberta base TEST"
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n",
"# Roberta base TEST"
] | [
28,
4
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n# Roberta base TEST"
] |
sentence-similarity | sentence-transformers |
# KBLab/sentence-bert-swedish-cased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps Swedish sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. This model is a bilingual Swedish-English model trained according to instructions in the paper [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/pdf/2004.09813.pdf) and the [documentation](https://www.sbert.net/examples/training/multilingual/README.html) accompanying its companion python package. We have used the strongest available pretrained English Bi-Encoder ([all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)) as a teacher model, and the pretrained Swedish [KB-BERT](https://huggingface.co/KB/bert-base-swedish-cased) as the student model.
A more detailed description of the model can be found in an article we published on the KBLab blog [here](https://kb-labb.github.io/posts/2021-08-23-a-swedish-sentence-transformer/) and for the updated model [here](https://kb-labb.github.io/posts/2023-01-16-sentence-transformer-20/).
**Update**: We have released updated versions of the model since the initial release. The original model described in the blog post is **v1.0**. The current version is **v2.0**. The newer versions are trained on longer paragraphs, and have a longer max sequence length. **v2.0** is trained with a stronger teacher model and is the current default.
| Model version | Teacher Model | Max Sequence Length |
|---------------|---------|----------|
| v1.0 | [paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) | 256 |
| v1.1 | [paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) | 384 |
| v2.0 | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 384 |
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Det här är en exempelmening", "Varje exempel blir konverterad"]
model = SentenceTransformer('KBLab/sentence-bert-swedish-cased')
embeddings = model.encode(sentences)
print(embeddings)
```
### Loading an older model version (Sentence-Transformers)
Currently, the easiest way to load an older model version is to clone the model repository and load it from disk. For example, to clone the **v1.0** model:
```bash
git clone --depth 1 --branch v1.0 https://huggingface.co/KBLab/sentence-bert-swedish-cased
```
Then you can load the model by pointing to the local folder where you cloned the model:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("path_to_model_folder/sentence-bert-swedish-cased")
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['Det här är en exempelmening', 'Varje exempel blir konverterad']
# Load model from HuggingFace Hub
# To load an older version, e.g. v1.0, add the argument revision="v1.0"
tokenizer = AutoTokenizer.from_pretrained('KBLab/sentence-bert-swedish-cased')
model = AutoModel.from_pretrained('KBLab/sentence-bert-swedish-cased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
### Loading an older model (Huggingface Transformers)
To load an older model, specify the version tag with the `revision` argument. For example, to load the **v1.0** model, use the following code:
```python
AutoTokenizer.from_pretrained('KBLab/sentence-bert-swedish-cased', revision="v1.0")
AutoModel.from_pretrained('KBLab/sentence-bert-swedish-cased', revision="v1.0")
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
The model was evaluated on [SweParaphrase v1.0](https://spraakbanken.gu.se/en/resources/sweparaphrase) and **SweParaphrase v2.0**. These test sets are part of [SuperLim](https://spraakbanken.gu.se/en/resources/superlim) -- a Swedish evaluation suite for natural language understanding tasks. We calculated Pearson and Spearman correlations between the predicted model similarity scores and the human similarity score labels. Results from **SweParaphrase v1.0** are displayed below.
| Model version | Pearson | Spearman |
|---------------|---------|----------|
| v1.0 | 0.9183 | 0.9114 |
| v1.1 | 0.9183 | 0.9114 |
| v2.0 | **0.9283** | **0.9130** |
The following code snippet can be used to reproduce the above results:
```python
from sentence_transformers import SentenceTransformer
import pandas as pd
df = pd.read_csv(
"sweparaphrase-dev-165.csv",
sep="\t",
header=None,
names=[
"original_id",
"source",
"type",
"sentence_swe1",
"sentence_swe2",
"score",
"sentence1",
"sentence2",
],
)
model = SentenceTransformer("KBLab/sentence-bert-swedish-cased")
sentences1 = df["sentence_swe1"].tolist()
sentences2 = df["sentence_swe2"].tolist()
# Compute embedding for both lists
embeddings1 = model.encode(sentences1, convert_to_tensor=True)
embeddings2 = model.encode(sentences2, convert_to_tensor=True)
# Compute cosine similarity after normalizing
embeddings1 /= embeddings1.norm(dim=-1, keepdim=True)
embeddings2 /= embeddings2.norm(dim=-1, keepdim=True)
cosine_scores = embeddings1 @ embeddings2.t()
sentence_pair_scores = cosine_scores.diag()
df["model_score"] = sentence_pair_scores.cpu().tolist()
print(df[["score", "model_score"]].corr(method="spearman"))
print(df[["score", "model_score"]].corr(method="pearson"))
```
### SweParaphrase v2.0
In general, **v1.1** correlates the most with human assessment of text similarity on SweParaphrase v2.0. Below, we present zero-shot evaluation results on all data splits. They display the model's performance out of the box, without any fine-tuning.
| Model version | Data split | Pearson | Spearman |
|---------------|------------|------------|------------|
| v1.0 | train | 0.8355 | 0.8256 |
| v1.1 | train | **0.8383** | **0.8302** |
| v2.0 | train | 0.8209 | 0.8059 |
| v1.0 | dev | 0.8682 | 0.8774 |
| v1.1 | dev | **0.8739** | **0.8833** |
| v2.0 | dev | 0.8638 | 0.8668 |
| v1.0 | test | 0.8356 | 0.8476 |
| v1.1 | test | **0.8393** | **0.8550** |
| v2.0 | test | 0.8232 | 0.8213 |
### SweFAQ v2.0
When it comes to retrieval tasks, **v2.0** performs the best by quite a substantial margin. It is better at matching the correct answer to a question compared to v1.1 and v1.0.
| Model version | Data split | Accuracy |
|---------------|------------|------------|
| v1.0 | train | 0.5262 |
| v1.1 | train | 0.6236 |
| v2.0 | train | **0.7106** |
| v1.0 | dev | 0.4636 |
| v1.1 | dev | 0.5818 |
| v2.0 | dev | **0.6727** |
| v1.0 | test | 0.4495 |
| v1.1 | test | 0.5229 |
| v2.0 | test | **0.5871** |
Examples of how to evaluate the models on some of the test sets of the SuperLim suite can be found at the following links: [evaluate_faq.py](https://github.com/kb-labb/swedish-sbert/blob/main/evaluate_faq.py) (Swedish FAQ), [evaluate_swesat.py](https://github.com/kb-labb/swedish-sbert/blob/main/evaluate_swesat.py) (SweSAT synonyms), [evaluate_supersim.py](https://github.com/kb-labb/swedish-sbert/blob/main/evaluate_supersim.py) (SuperSim).
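To make the retrieval setting concrete, the following is a minimal sketch of FAQ-style answer retrieval with this model: encode a question and a set of candidate answers, then pick the candidate with the highest cosine similarity. The question and answers below are invented for illustration and are not taken from the SweFAQ dataset.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("KBLab/sentence-bert-swedish-cased")

# Hypothetical question and candidate answers (not from the SweFAQ dataset)
question = "Hur deklarerar jag från utlandet?"
candidate_answers = [
    "Du som befinner dig i utlandet kan deklarera digitalt på flera olika sätt.",
    "Du som har kvarskatt att betala ska göra en inbetalning till ditt skattekonto.",
    "Efter att du har deklarerat går vi igenom uppgifterna i din deklaration.",
]

# Embed the question and all candidate answers in the same vector space
question_emb = model.encode(question, convert_to_tensor=True)
answer_embs = model.encode(candidate_answers, convert_to_tensor=True)

# Cosine similarity between the question and each candidate; highest score wins
scores = util.cos_sim(question_emb, answer_embs)[0]
best_idx = int(scores.argmax())
print(f"Best answer ({scores[best_idx]:.3f}): {candidate_answers[best_idx]}")
```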
## Training
An article with more details on data and v1.0 of the model can be found on the [KBLab blog](https://kb-labb.github.io/posts/2021-08-23-a-swedish-sentence-transformer/).
Around 14.6 million sentences from English-Swedish parallel corpora were used to train the model. Data was sourced from the [Open Parallel Corpus](https://opus.nlpl.eu/) (OPUS) and downloaded via the Python package [opustools](https://pypi.org/project/opustools/). Datasets used were: JW300, Europarl, DGT-TM, EMEA, ELITR-ECA, TED2020, Tatoeba and OpenSubtitles.
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 180513 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
    "epochs": 2,
    "evaluation_steps": 1000,
    "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "eps": 1e-06,
        "lr": 8e-06
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 5000,
    "weight_decay": 0.01
}
```
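The hyperparameters above map onto the standard sentence-transformers `fit()` API with an MSE distillation loss. The snippet below is a rough sketch of how such a run could be wired up; it is not the actual KBLab training script. The two English-Swedish pairs are toy stand-ins for the roughly 14.6 million real training sentences, and the teacher (`all-mpnet-base-v2`) and student (Swedish KB-BERT with mean pooling) follow the description given for v2.0 of this model.
```python
from sentence_transformers import SentenceTransformer, InputExample, losses, models
from torch.utils.data import DataLoader

# Teacher: strong English bi-encoder; student: Swedish KB-BERT with mean pooling
teacher = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
word_embedding = models.Transformer("KB/bert-base-swedish-cased", max_seq_length=384)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension())
student = SentenceTransformer(modules=[word_embedding, pooling])

# Toy parallel data (English source, Swedish translation); illustration only
parallel_pairs = [
    ("The cat sits on the mat.", "Katten sitter på mattan."),
    ("I would like a cup of coffee.", "Jag skulle vilja ha en kopp kaffe."),
]

# The student learns to map both the English sentence and its Swedish translation
# onto the teacher's embedding of the English sentence (knowledge distillation).
train_examples = [
    InputExample(texts=[en, sv], label=teacher.encode(en))
    for en, sv in parallel_pairs
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.MSELoss(model=student)

student.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=2,
    warmup_steps=5000,
    scheduler="WarmupLinear",
    optimizer_params={"lr": 8e-06, "eps": 1e-06},
    weight_decay=0.01,
    max_grad_norm=1,
)
```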
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
This model was trained by KBLab, a data lab at the National Library of Sweden.
You can cite the article on our blog: https://kb-labb.github.io/posts/2021-08-23-a-swedish-sentence-transformer/.
```
@misc{rekathati2021introducing,
    author = {Rekathati, Faton},
    title = {The KBLab Blog: Introducing a Swedish Sentence Transformer},
    url = {https://kb-labb.github.io/posts/2021-08-23-a-swedish-sentence-transformer/},
    year = {2021}
}
```
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium ([www.hpc-rivr.si](https://www.hpc-rivr.si/)) and EuroHPC JU ([eurohpc-ju.europa.eu/](https://eurohpc-ju.europa.eu/)) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science ([www.izum.si](https://www.izum.si/)). | {"language": ["sv"], "license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity", "lang": ["sv"], "widget": [{"source_sentence": "Mannen \u00e5t mat.", "sentences": ["Han f\u00f6rt\u00e4rde en n\u00e4rande och nyttig m\u00e5ltid.", "Det var ett sunkigt hak med ganska gott k\u00e4k.", "Han inmundigade middagen tillsammans med ett glas r\u00f6dvin.", "Potatischips \u00e4r j\u00e4ttegoda.", "Tryck p\u00e5 knappen f\u00f6r att f\u00e5 tala med kundsupporten."], "example_title": "Mat"}, {"source_sentence": "Kan jag deklarera digitalt fr\u00e5n utlandet?", "sentences": ["Du som befinner dig i utlandet kan deklarera digitalt p\u00e5 flera olika s\u00e4tt.", "Du som har kvarskatt att betala ska g\u00f6ra en inbetalning till ditt skattekonto.", "Efter att du har deklarerat g\u00e5r vi igenom uppgifterna i din deklaration och r\u00e4knar ut din skatt.", "I din deklaration som du f\u00e5r fr\u00e5n oss har vi r\u00e4knat ut vad du ska betala eller f\u00e5 tillbaka.", "Tryck p\u00e5 knappen f\u00f6r att f\u00e5 tala med kundsupporten."], "example_title": "Skatteverket FAQ"}, {"source_sentence": "Hon kunde g\u00f6ra bak\u00e5tvolter.", "sentences": ["Hon var atletisk.", "Hon var bra p\u00e5 gymnastik.", "Hon var inte atletisk.", "Hon var of\u00f6rm\u00f6gen att flippa bakl\u00e4nges."], "example_title": "Gymnastik"}]} | KBLab/sentence-bert-swedish-cased | null | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"sv",
"arxiv:2004.09813",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2004.09813"
] | [
"sv"
] | TAGS
#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #sv #arxiv-2004.09813 #license-apache-2.0 #endpoints_compatible #has_space #region-us
| KBLab/sentence-bert-swedish-cased
=================================
This is a sentence-transformers model: It maps Swedish sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. This model is a bilingual Swedish-English model trained according to instructions in the paper Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation and the documentation accompanying its companion python package. We have used the strongest available pretrained English Bi-Encoder (all-mpnet-base-v2) as a teacher model, and the pretrained Swedish KB-BERT as the student model.
A more detailed description of the model can be found in an article we published on the KBLab blog here and for the updated model here.
Update: We have released updated versions of the model since the initial release. The original model described in the blog post is v1.0. The current version is v2.0. The newer versions are trained on longer paragraphs, and have a longer max sequence length. v2.0 is trained with a stronger teacher model and is the current default.
Model version: v1.0, Teacher Model: paraphrase-mpnet-base-v2, Max Sequence Length: 256
Model version: v1.1, Teacher Model: paraphrase-mpnet-base-v2, Max Sequence Length: 384
Model version: v2.0, Teacher Model: all-mpnet-base-v2, Max Sequence Length: 384
Usage (Sentence-Transformers)
-----------------------------
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
### Loading an older model version (Sentence-Transformers)
Currently, the easiest way to load an older model version is to clone the model repository and load it from disk. For example, to clone the v1.0 model:
Then you can load the model by pointing to the local folder where you cloned the model:
Usage (HuggingFace Transformers)
--------------------------------
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
### Loading an older model (Hugginfface Transformers)
To load an older model specify the version tag with the 'revision' arg. For example, to load the v1.0 model, use the following code:
Evaluation Results
------------------
The model was evaluated on SweParaphrase v1.0 and SweParaphrase v2.0. This test set is part of SuperLim -- a Swedish evaluation suite for natural langage understanding tasks. We calculated Pearson and Spearman correlation between predicted model similarity scores and the human similarity score labels. Results from SweParaphrase v1.0 are displayed below.
Model version: v1.0, Pearson: 0.9183, Spearman: 0.9114
Model version: v1.1, Pearson: 0.9183, Spearman: 0.9114
Model version: v2.0, Pearson: 0.9283, Spearman: 0.9130
The following code snippet can be used to reproduce the above results:
### Sweparaphrase v2.0
In general, v1.1 correlates the most with human assessment of text similarity on SweParaphrase v2.0. Below, we present zero-shot evaluation results on all data splits. They display the model's performance out of the box, without any fine-tuning.
### SweFAQ v2.0
When it comes to retrieval tasks, v2.0 performs the best by quite a substantial margin. It is better at matching the correct answer to a question compared to v1.1 and v1.0.
Model version: v1.0, Data split: train, Accuracy: 0.5262
Model version: v1.1, Data split: train, Accuracy: 0.6236
Model version: v2.0, Data split: train, Accuracy: 0.7106
Model version: v1.0, Data split: dev, Accuracy: 0.4636
Model version: v1.1, Data split: dev, Accuracy: 0.5818
Model version: v2.0, Data split: dev, Accuracy: 0.6727
Model version: v1.0, Data split: test, Accuracy: 0.4495
Model version: v1.1, Data split: test, Accuracy: 0.5229
Model version: v2.0, Data split: test, Accuracy: 0.5871
Examples how to evaluate the models on some of the test sets of the SuperLim suites can be found on the following links: evaluate\_faq.py (Swedish FAQ), evaluate\_swesat.py (SweSAT synonyms), evaluate\_supersim.py (SuperSim).
Training
--------
An article with more details on data and v1.0 of the model can be found on the KBLab blog.
Around 14.6 million sentences from English-Swedish parallel corpuses were used to train the model. Data was sourced from the Open Parallel Corpus (OPUS) and downloaded via the python package opustools. Datasets used were: JW300, Europarl, DGT-TM, EMEA, ELITR-ECA, TED2020, Tatoeba and OpenSubtitles.
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 180513 with parameters:
Loss:
'sentence\_transformers.losses.MSELoss.MSELoss'
Parameters of the fit()-Method:
Full Model Architecture
-----------------------
Citing & Authors
----------------
This model was trained by KBLab, a data lab at the National Library of Sweden.
You can cite the article on our blog: URL .
Acknowledgements
----------------
We gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL).
| [
"### Loading an older model version (Sentence-Transformers)\n\n\nCurrently, the easiest way to load an older model version is to clone the model repository and load it from disk. For example, to clone the v1.0 model:\n\n\nThen you can load the model by pointing to the local folder where you cloned the model:\n\n\nUsage (HuggingFace Transformers)\n--------------------------------\n\n\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"### Loading an older model (Hugginfface Transformers)\n\n\nTo load an older model specify the version tag with the 'revision' arg. For example, to load the v1.0 model, use the following code:\n\n\nEvaluation Results\n------------------\n\n\nThe model was evaluated on SweParaphrase v1.0 and SweParaphrase v2.0. This test set is part of SuperLim -- a Swedish evaluation suite for natural langage understanding tasks. We calculated Pearson and Spearman correlation between predicted model similarity scores and the human similarity score labels. Results from SweParaphrase v1.0 are displayed below.\n\n\nModel version: v1.0, Pearson: 0.9183, Spearman: 0.9114\nModel version: v1.1, Pearson: 0.9183, Spearman: 0.9114\nModel version: v2.0, Pearson: 0.9283, Spearman: 0.9130\n\n\nThe following code snippet can be used to reproduce the above results:",
"### Sweparaphrase v2.0\n\n\nIn general, v1.1 correlates the most with human assessment of text similarity on SweParaphrase v2.0. Below, we present zero-shot evaluation results on all data splits. They display the model's performance out of the box, without any fine-tuning.",
"### SweFAQ v2.0\n\n\nWhen it comes to retrieval tasks, v2.0 performs the best by quite a substantial margin. It is better at matching the correct answer to a question compared to v1.1 and v1.0.\n\n\nModel version: v1.0, Data split: train, Accuracy: 0.5262\nModel version: v1.1, Data split: train, Accuracy: 0.6236\nModel version: v2.0, Data split: train, Accuracy: 0.7106\nModel version: v1.0, Data split: dev, Accuracy: 0.4636\nModel version: v1.1, Data split: dev, Accuracy: 0.5818\nModel version: v2.0, Data split: dev, Accuracy: 0.6727\nModel version: v1.0, Data split: test, Accuracy: 0.4495\nModel version: v1.1, Data split: test, Accuracy: 0.5229\nModel version: v2.0, Data split: test, Accuracy: 0.5871\n\n\nExamples how to evaluate the models on some of the test sets of the SuperLim suites can be found on the following links: evaluate\\_faq.py (Swedish FAQ), evaluate\\_swesat.py (SweSAT synonyms), evaluate\\_supersim.py (SuperSim).\n\n\nTraining\n--------\n\n\nAn article with more details on data and v1.0 of the model can be found on the KBLab blog.\n\n\nAround 14.6 million sentences from English-Swedish parallel corpuses were used to train the model. Data was sourced from the Open Parallel Corpus (OPUS) and downloaded via the python package opustools. Datasets used were: JW300, Europarl, DGT-TM, EMEA, ELITR-ECA, TED2020, Tatoeba and OpenSubtitles.\n\n\nThe model was trained with the parameters:\n\n\nDataLoader:\n\n\n'URL.dataloader.DataLoader' of length 180513 with parameters:\n\n\nLoss:\n\n\n'sentence\\_transformers.losses.MSELoss.MSELoss'\n\n\nParameters of the fit()-Method:\n\n\nFull Model Architecture\n-----------------------\n\n\nCiting & Authors\n----------------\n\n\nThis model was trained by KBLab, a data lab at the National Library of Sweden.\n\n\nYou can cite the article on our blog: URL .\n\n\nAcknowledgements\n----------------\n\n\nWe gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL)."
] | [
"TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #sv #arxiv-2004.09813 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### Loading an older model version (Sentence-Transformers)\n\n\nCurrently, the easiest way to load an older model version is to clone the model repository and load it from disk. For example, to clone the v1.0 model:\n\n\nThen you can load the model by pointing to the local folder where you cloned the model:\n\n\nUsage (HuggingFace Transformers)\n--------------------------------\n\n\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"### Loading an older model (Hugginfface Transformers)\n\n\nTo load an older model specify the version tag with the 'revision' arg. For example, to load the v1.0 model, use the following code:\n\n\nEvaluation Results\n------------------\n\n\nThe model was evaluated on SweParaphrase v1.0 and SweParaphrase v2.0. This test set is part of SuperLim -- a Swedish evaluation suite for natural langage understanding tasks. We calculated Pearson and Spearman correlation between predicted model similarity scores and the human similarity score labels. Results from SweParaphrase v1.0 are displayed below.\n\n\nModel version: v1.0, Pearson: 0.9183, Spearman: 0.9114\nModel version: v1.1, Pearson: 0.9183, Spearman: 0.9114\nModel version: v2.0, Pearson: 0.9283, Spearman: 0.9130\n\n\nThe following code snippet can be used to reproduce the above results:",
"### Sweparaphrase v2.0\n\n\nIn general, v1.1 correlates the most with human assessment of text similarity on SweParaphrase v2.0. Below, we present zero-shot evaluation results on all data splits. They display the model's performance out of the box, without any fine-tuning.",
"### SweFAQ v2.0\n\n\nWhen it comes to retrieval tasks, v2.0 performs the best by quite a substantial margin. It is better at matching the correct answer to a question compared to v1.1 and v1.0.\n\n\nModel version: v1.0, Data split: train, Accuracy: 0.5262\nModel version: v1.1, Data split: train, Accuracy: 0.6236\nModel version: v2.0, Data split: train, Accuracy: 0.7106\nModel version: v1.0, Data split: dev, Accuracy: 0.4636\nModel version: v1.1, Data split: dev, Accuracy: 0.5818\nModel version: v2.0, Data split: dev, Accuracy: 0.6727\nModel version: v1.0, Data split: test, Accuracy: 0.4495\nModel version: v1.1, Data split: test, Accuracy: 0.5229\nModel version: v2.0, Data split: test, Accuracy: 0.5871\n\n\nExamples how to evaluate the models on some of the test sets of the SuperLim suites can be found on the following links: evaluate\\_faq.py (Swedish FAQ), evaluate\\_swesat.py (SweSAT synonyms), evaluate\\_supersim.py (SuperSim).\n\n\nTraining\n--------\n\n\nAn article with more details on data and v1.0 of the model can be found on the KBLab blog.\n\n\nAround 14.6 million sentences from English-Swedish parallel corpuses were used to train the model. Data was sourced from the Open Parallel Corpus (OPUS) and downloaded via the python package opustools. Datasets used were: JW300, Europarl, DGT-TM, EMEA, ELITR-ECA, TED2020, Tatoeba and OpenSubtitles.\n\n\nThe model was trained with the parameters:\n\n\nDataLoader:\n\n\n'URL.dataloader.DataLoader' of length 180513 with parameters:\n\n\nLoss:\n\n\n'sentence\\_transformers.losses.MSELoss.MSELoss'\n\n\nParameters of the fit()-Method:\n\n\nFull Model Architecture\n-----------------------\n\n\nCiting & Authors\n----------------\n\n\nThis model was trained by KBLab, a data lab at the National Library of Sweden.\n\n\nYou can cite the article on our blog: URL .\n\n\nAcknowledgements\n----------------\n\n\nWe gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL)."
] | [
55,
155,
232,
73,
612
] | [
"TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #sv #arxiv-2004.09813 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n### Loading an older model version (Sentence-Transformers)\n\n\nCurrently, the easiest way to load an older model version is to clone the model repository and load it from disk. For example, to clone the v1.0 model:\n\n\nThen you can load the model by pointing to the local folder where you cloned the model:\n\n\nUsage (HuggingFace Transformers)\n--------------------------------\n\n\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.### Loading an older model (Hugginfface Transformers)\n\n\nTo load an older model specify the version tag with the 'revision' arg. For example, to load the v1.0 model, use the following code:\n\n\nEvaluation Results\n------------------\n\n\nThe model was evaluated on SweParaphrase v1.0 and SweParaphrase v2.0. This test set is part of SuperLim -- a Swedish evaluation suite for natural langage understanding tasks. We calculated Pearson and Spearman correlation between predicted model similarity scores and the human similarity score labels. Results from SweParaphrase v1.0 are displayed below.\n\n\nModel version: v1.0, Pearson: 0.9183, Spearman: 0.9114\nModel version: v1.1, Pearson: 0.9183, Spearman: 0.9114\nModel version: v2.0, Pearson: 0.9283, Spearman: 0.9130\n\n\nThe following code snippet can be used to reproduce the above results:### Sweparaphrase v2.0\n\n\nIn general, v1.1 correlates the most with human assessment of text similarity on SweParaphrase v2.0. Below, we present zero-shot evaluation results on all data splits. They display the model's performance out of the box, without any fine-tuning.### SweFAQ v2.0\n\n\nWhen it comes to retrieval tasks, v2.0 performs the best by quite a substantial margin. It is better at matching the correct answer to a question compared to v1.1 and v1.0.\n\n\nModel version: v1.0, Data split: train, Accuracy: 0.5262\nModel version: v1.1, Data split: train, Accuracy: 0.6236\nModel version: v2.0, Data split: train, Accuracy: 0.7106\nModel version: v1.0, Data split: dev, Accuracy: 0.4636\nModel version: v1.1, Data split: dev, Accuracy: 0.5818\nModel version: v2.0, Data split: dev, Accuracy: 0.6727\nModel version: v1.0, Data split: test, Accuracy: 0.4495\nModel version: v1.1, Data split: test, Accuracy: 0.5229\nModel version: v2.0, Data split: test, Accuracy: 0.5871\n\n\nExamples how to evaluate the models on some of the test sets of the SuperLim suites can be found on the following links: evaluate\\_faq.py (Swedish FAQ), evaluate\\_swesat.py (SweSAT synonyms), evaluate\\_supersim.py (SuperSim).\n\n\nTraining\n--------\n\n\nAn article with more details on data and v1.0 of the model can be found on the KBLab blog.\n\n\nAround 14.6 million sentences from English-Swedish parallel corpuses were used to train the model. Data was sourced from the Open Parallel Corpus (OPUS) and downloaded via the python package opustools. 
Datasets used were: JW300, Europarl, DGT-TM, EMEA, ELITR-ECA, TED2020, Tatoeba and OpenSubtitles.\n\n\nThe model was trained with the parameters:\n\n\nDataLoader:\n\n\n'URL.dataloader.DataLoader' of length 180513 with parameters:\n\n\nLoss:\n\n\n'sentence\\_transformers.losses.MSELoss.MSELoss'\n\n\nParameters of the fit()-Method:\n\n\nFull Model Architecture\n-----------------------\n\n\nCiting & Authors\n----------------\n\n\nThis model was trained by KBLab, a data lab at the National Library of Sweden.\n\n\nYou can cite the article on our blog: URL .\n\n\nAcknowledgements\n----------------\n\n\nWe gratefully acknowledge the HPC RIVR consortium (URL) and EuroHPC JU (URL for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (URL)."
] |
automatic-speech-recognition | transformers | # Wav2vec 2.0 base-voxpopuli-sv-swedish
Finetuned version of Facebook's [VoxPopuli-sv base](https://huggingface.co/facebook/wav2vec2-base-sv-voxpopuli) model using NST and Common Voice data. Evaluation without a language model gives the following: WER for the NST + Common Voice test set (2% of total sentences) is **5.62%**, and WER for the Common Voice test set is **19.15%**.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-base-voxpopuli-sv-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-base-voxpopuli-sv-swedish")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
``` | {"language": "sv-SE", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "speech", "voxpopuli"], "datasets": ["common_voice", "NST Swedish ASR Database"], "metrics": ["wer"]} | KBLab/wav2vec2-base-voxpopuli-sv-swedish | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"voxpopuli",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sv-SE"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #voxpopuli #license-cc-by-nc-4.0 #endpoints_compatible #region-us
| # Wav2vec 2.0 base-voxpopuli-sv-swedish
Finetuned version of Facebooks VoxPopuli-sv base model using NST and Common Voice data. Evalutation without a language model gives the following: WER for NST + Common Voice test set (2% of total sentences) is 5.62%, WER for Common Voice test set is 19.15%.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
| [
"# Wav2vec 2.0 base-voxpopuli-sv-swedish\nFinetuned version of Facebooks VoxPopuli-sv base model using NST and Common Voice data. Evalutation without a language model gives the following: WER for NST + Common Voice test set (2% of total sentences) is 5.62%, WER for Common Voice test set is 19.15%.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #voxpopuli #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"# Wav2vec 2.0 base-voxpopuli-sv-swedish\nFinetuned version of Facebooks VoxPopuli-sv base model using NST and Common Voice data. Evalutation without a language model gives the following: WER for NST + Common Voice test set (2% of total sentences) is 5.62%, WER for Common Voice test set is 19.15%.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:"
] | [
50,
105,
18
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #speech #voxpopuli #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n# Wav2vec 2.0 base-voxpopuli-sv-swedish\nFinetuned version of Facebooks VoxPopuli-sv base model using NST and Common Voice data. Evalutation without a language model gives the following: WER for NST + Common Voice test set (2% of total sentences) is 5.62%, WER for Common Voice test set is 19.15%.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.## Usage\nThe model can be used directly (without a language model) as follows:"
] |
automatic-speech-recognition | transformers | # Wav2vec 2.0 large-voxpopuli-sv-swedish
**PLEASE NOTE that [this](https://huggingface.co/KBLab/wav2vec2-large-voxrex-swedish) model performs better and has a less restrictive license.**
Additionally pretrained and finetuned version of Facebook's [VoxPopuli-sv large](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) model using Swedish radio broadcasts, NST and Common Voice data. Evaluation without a language model gives the following: WER for the NST + Common Voice test set (2% of total sentences) is **3.95%**. WER for the Common Voice test set is **10.99%** directly and **7.82%** with a 4-gram language model.
When using this model, make sure that your speech input is sampled at 16kHz.
## Training
This model has additionally been pretrained on 1000h of Swedish local radio broadcasts, fine-tuned for 120000 updates on NST + CommonVoice and then for an additional 20000 updates on CommonVoice only. The additional fine-tuning on CommonVoice hurts performance on the NST+CommonVoice test set somewhat and, unsurprisingly, improves it on the CommonVoice test set. It seems to perform generally better though [citation needed].
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-voxpopuli-sv-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-voxpopuli-sv-swedish")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
``` | {"language": "sv-SE", "license": "cc-by-nc-4.0", "tags": ["audio", "automatic-speech-recognition", "speech", "voxpopuli"], "datasets": ["common_voice", "NST Swedish ASR Database"], "metrics": ["wer", "cer"], "model-index": [{"name": "Wav2vec 2.0 large VoxPopuli-sv swedish", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice", "type": "common_voice", "args": "sv-SE"}, "metrics": [{"type": "wer", "value": 10.994764, "name": "Test WER"}, {"type": "cer", "value": 3.946846, "name": "Test CER"}]}]}]} | KBLab/wav2vec2-large-voxpopuli-sv-swedish | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"voxpopuli",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sv-SE"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #voxpopuli #license-cc-by-nc-4.0 #model-index #endpoints_compatible #region-us
| # Wav2vec 2.0 large-voxpopuli-sv-swedish
PLEASE NOTE that this model performs better and has a less restrictive license.
Additionally pretrained and finetuned version of Facebooks VoxPopuli-sv large model using Swedish radio broadcasts, NST and Common Voice data. Evalutation without a language model gives the following: WER for NST + Common Voice test set (2% of total sentences) is 3.95%. WER for Common Voice test set is 10.99% directly and 7.82% with a 4-gram language model.
When using this model, make sure that your speech input is sampled at 16kHz.
## Training
This model has additionally pretrained on 1000h of Swedish local radio broadcasts, fine-tuned for 120000 updates on NST + CommonVoice and then for an additional 20000 updates on CommonVoice only. The additional fine-tuning on CommonVoice hurts performance on the NST+CommonVoice test set somewhat and, unsurprisingly, improves it on the CommonVoice test set. It seems to perform generally better though [citation needed].
## Usage
The model can be used directly (without a language model) as follows:
| [
"# Wav2vec 2.0 large-voxpopuli-sv-swedish\n\nPLEASE NOTE that this model performs better and has a less restrictive license.\n\n\nAdditionally pretrained and finetuned version of Facebooks VoxPopuli-sv large model using Swedish radio broadcasts, NST and Common Voice data. Evalutation without a language model gives the following: WER for NST + Common Voice test set (2% of total sentences) is 3.95%. WER for Common Voice test set is 10.99% directly and 7.82% with a 4-gram language model.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Training\nThis model has additionally pretrained on 1000h of Swedish local radio broadcasts, fine-tuned for 120000 updates on NST + CommonVoice and then for an additional 20000 updates on CommonVoice only. The additional fine-tuning on CommonVoice hurts performance on the NST+CommonVoice test set somewhat and, unsurprisingly, improves it on the CommonVoice test set. It seems to perform generally better though [citation needed].",
"## Usage\nThe model can be used directly (without a language model) as follows:"
] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #voxpopuli #license-cc-by-nc-4.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2vec 2.0 large-voxpopuli-sv-swedish\n\nPLEASE NOTE that this model performs better and has a less restrictive license.\n\n\nAdditionally pretrained and finetuned version of Facebooks VoxPopuli-sv large model using Swedish radio broadcasts, NST and Common Voice data. Evalutation without a language model gives the following: WER for NST + Common Voice test set (2% of total sentences) is 3.95%. WER for Common Voice test set is 10.99% directly and 7.82% with a 4-gram language model.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Training\nThis model has additionally pretrained on 1000h of Swedish local radio broadcasts, fine-tuned for 120000 updates on NST + CommonVoice and then for an additional 20000 updates on CommonVoice only. The additional fine-tuning on CommonVoice hurts performance on the NST+CommonVoice test set somewhat and, unsurprisingly, improves it on the CommonVoice test set. It seems to perform generally better though [citation needed].",
"## Usage\nThe model can be used directly (without a language model) as follows:"
] | [
56,
141,
99,
18
] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #voxpopuli #license-cc-by-nc-4.0 #model-index #endpoints_compatible #region-us \n# Wav2vec 2.0 large-voxpopuli-sv-swedish\n\nPLEASE NOTE that this model performs better and has a less restrictive license.\n\n\nAdditionally pretrained and finetuned version of Facebooks VoxPopuli-sv large model using Swedish radio broadcasts, NST and Common Voice data. Evalutation without a language model gives the following: WER for NST + Common Voice test set (2% of total sentences) is 3.95%. WER for Common Voice test set is 10.99% directly and 7.82% with a 4-gram language model.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.## Training\nThis model has additionally pretrained on 1000h of Swedish local radio broadcasts, fine-tuned for 120000 updates on NST + CommonVoice and then for an additional 20000 updates on CommonVoice only. The additional fine-tuning on CommonVoice hurts performance on the NST+CommonVoice test set somewhat and, unsurprisingly, improves it on the CommonVoice test set. It seems to perform generally better though [citation needed].## Usage\nThe model can be used directly (without a language model) as follows:"
] |
automatic-speech-recognition | transformers | # Wav2vec 2.0 large VoxRex Swedish (C)
Finetuned version of KB's [VoxRex large](https://huggingface.co/KBLab/wav2vec2-large-voxrex) model using Swedish radio broadcasts, NST and Common Voice data. Evaluation without a language model gives the following: WER for the NST + Common Voice test set (2% of total sentences) is **2.5%**. WER for the Common Voice test set is **8.49%** directly and **7.37%** with a 4-gram language model.
When using this model, make sure that your speech input is sampled at 16kHz.
**Update 2022-01-10:** Updated to VoxRex-C version.
**Update 2022-05-16:** The paper is available [here](https://arxiv.org/abs/2205.03026).
# Performance\*

<center><del>*<i>Chart shows performance without the additional 20k steps of Common Voice fine-tuning</i></del></center>
## Training
This model has been fine-tuned for 120000 updates on NST + CommonVoice<del> and then for an additional 20000 updates on CommonVoice only. The additional fine-tuning on CommonVoice hurts performance on the NST+CommonVoice test set somewhat and, unsurprisingly, improves it on the CommonVoice test set. It seems to perform generally better though [citation needed]</del>.

## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-voxrex-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-voxrex-swedish")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
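The WER with a 4-gram language model quoted above requires an external language model; none is shipped in this repository. If you have built your own KenLM 4-gram model on Swedish text, one common way to plug it in is via `pyctcdecode` and `Wav2Vec2ProcessorWithLM`, roughly as in the hedged sketch below. The file `4gram_sv.arpa` is a placeholder for a model you supply yourself, and the vocabulary casing/handling may need adjustment for your LM.
```python
import numpy as np
import torch
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor, Wav2Vec2ProcessorWithLM

processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-voxrex-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-voxrex-swedish")

# Build a CTC decoder backed by your own KenLM model (placeholder path below;
# the arpa file is NOT distributed with this repository)
vocab = processor.tokenizer.get_vocab()
labels = [tok for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]
decoder = build_ctcdecoder(labels=labels, kenlm_model_path="4gram_sv.arpa")

processor_with_lm = Wav2Vec2ProcessorWithLM(
    feature_extractor=processor.feature_extractor,
    tokenizer=processor.tokenizer,
    decoder=decoder,
)

# Placeholder input: one second of silence at 16 kHz (replace with real audio)
audio = np.zeros(16_000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

# The LM-aware processor decodes from (numpy) logits rather than argmax ids
transcription = processor_with_lm.batch_decode(logits.numpy()).text
print(transcription)
```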
## Citation
https://arxiv.org/abs/2205.03026
```
@misc{malmsten2022hearing,
    title={Hearing voices at the National Library -- a speech corpus and acoustic model for the Swedish language},
    author={Martin Malmsten and Chris Haffenden and Love Börjeson},
    year={2022},
    eprint={2205.03026},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
``` | {"language": "sv", "license": "cc0-1.0", "tags": ["audio", "automatic-speech-recognition", "speech", "hf-asr-leaderboard"], "datasets": ["common_voice", "NST_Swedish_ASR_Database", "P4"], "metrics": ["wer"], "arxiv": "https://arxiv.org/abs/2205.03026", "model-index": [{"name": "Wav2vec 2.0 large VoxRex Swedish", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice", "type": "common_voice", "args": "sv-SE"}, "metrics": [{"type": "wer", "value": 8.49, "name": "Test WER"}]}]}]} | KBLab/wav2vec2-large-voxrex-swedish | null | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"hf-asr-leaderboard",
"sv",
"dataset:common_voice",
"dataset:NST_Swedish_ASR_Database",
"dataset:P4",
"arxiv:2205.03026",
"license:cc0-1.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2205.03026"
] | [
"sv"
] | TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech #hf-asr-leaderboard #sv #dataset-common_voice #dataset-NST_Swedish_ASR_Database #dataset-P4 #arxiv-2205.03026 #license-cc0-1.0 #model-index #endpoints_compatible #region-us
| # Wav2vec 2.0 large VoxRex Swedish (C)
Finetuned version of KBs VoxRex large model using Swedish radio broadcasts, NST and Common Voice data. Evalutation without a language model gives the following: WER for NST + Common Voice test set (2% of total sentences) is 2.5%. WER for Common Voice test set is 8.49% directly and 7.37% with a 4-gram language model.
When using this model, make sure that your speech input is sampled at 16kHz.
Update 2022-01-10: Updated to VoxRex-C version.
Update 2022-05-16: Paper is is here.
# Performance\*
!Comparison
<center><del>*<i>Chart shows performance without the additional 20k steps of Common Voice fine-tuning</i></del></center>
## Training
This model has been fine-tuned for 120000 updates on NST + CommonVoice<del> and then for an additional 20000 updates on CommonVoice only. The additional fine-tuning on CommonVoice hurts performance on the NST+CommonVoice test set somewhat and, unsurprisingly, improves it on the CommonVoice test set. It seems to perform generally better though [citation needed]</del>.
!WER during training
## Usage
The model can be used directly (without a language model) as follows:
URL
| [
"# Wav2vec 2.0 large VoxRex Swedish (C)\n\nFinetuned version of KBs VoxRex large model using Swedish radio broadcasts, NST and Common Voice data. Evalutation without a language model gives the following: WER for NST + Common Voice test set (2% of total sentences) is 2.5%. WER for Common Voice test set is 8.49% directly and 7.37% with a 4-gram language model.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.\n\nUpdate 2022-01-10: Updated to VoxRex-C version.\n\nUpdate 2022-05-16: Paper is is here.",
"# Performance\\*\n\n!Comparison\n<center><del>*<i>Chart shows performance without the additional 20k steps of Common Voice fine-tuning</i></del></center>",
"## Training\nThis model has been fine-tuned for 120000 updates on NST + CommonVoice<del> and then for an additional 20000 updates on CommonVoice only. The additional fine-tuning on CommonVoice hurts performance on the NST+CommonVoice test set somewhat and, unsurprisingly, improves it on the CommonVoice test set. It seems to perform generally better though [citation needed]</del>.\n\n!WER during training",
"## Usage\nThe model can be used directly (without a language model) as follows:\n\n\nURL"
] | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech #hf-asr-leaderboard #sv #dataset-common_voice #dataset-NST_Swedish_ASR_Database #dataset-P4 #arxiv-2205.03026 #license-cc0-1.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2vec 2.0 large VoxRex Swedish (C)\n\nFinetuned version of KBs VoxRex large model using Swedish radio broadcasts, NST and Common Voice data. Evalutation without a language model gives the following: WER for NST + Common Voice test set (2% of total sentences) is 2.5%. WER for Common Voice test set is 8.49% directly and 7.37% with a 4-gram language model.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.\n\nUpdate 2022-01-10: Updated to VoxRex-C version.\n\nUpdate 2022-05-16: Paper is is here.",
"# Performance\\*\n\n!Comparison\n<center><del>*<i>Chart shows performance without the additional 20k steps of Common Voice fine-tuning</i></del></center>",
"## Training\nThis model has been fine-tuned for 120000 updates on NST + CommonVoice<del> and then for an additional 20000 updates on CommonVoice only. The additional fine-tuning on CommonVoice hurts performance on the NST+CommonVoice test set somewhat and, unsurprisingly, improves it on the CommonVoice test set. It seems to perform generally better though [citation needed]</del>.\n\n!WER during training",
"## Usage\nThe model can be used directly (without a language model) as follows:\n\n\nURL"
] | [
99,
149,
43,
99,
20
] | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech #hf-asr-leaderboard #sv #dataset-common_voice #dataset-NST_Swedish_ASR_Database #dataset-P4 #arxiv-2205.03026 #license-cc0-1.0 #model-index #endpoints_compatible #region-us \n# Wav2vec 2.0 large VoxRex Swedish (C)\n\nFinetuned version of KBs VoxRex large model using Swedish radio broadcasts, NST and Common Voice data. Evalutation without a language model gives the following: WER for NST + Common Voice test set (2% of total sentences) is 2.5%. WER for Common Voice test set is 8.49% directly and 7.37% with a 4-gram language model.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.\n\nUpdate 2022-01-10: Updated to VoxRex-C version.\n\nUpdate 2022-05-16: Paper is is here.# Performance\\*\n\n!Comparison\n<center><del>*<i>Chart shows performance without the additional 20k steps of Common Voice fine-tuning</i></del></center>## Training\nThis model has been fine-tuned for 120000 updates on NST + CommonVoice<del> and then for an additional 20000 updates on CommonVoice only. The additional fine-tuning on CommonVoice hurts performance on the NST+CommonVoice test set somewhat and, unsurprisingly, improves it on the CommonVoice test set. It seems to perform generally better though [citation needed]</del>.\n\n!WER during training## Usage\nThe model can be used directly (without a language model) as follows:\n\n\nURL"
] |
automatic-speech-recognition | transformers |
# Wav2vec 2.0 large VoxRex (C)
**Please note:** The model hosted in this repository is a pretrained wav2vec2 model without a CTC head; as such, it cannot do speech-to-text. If you are interested in speech-to-text, see our finetuned version of this model, which can be found at [KBLab/wav2vec2-large-voxrex-swedish](https://huggingface.co/KBLab/wav2vec2-large-voxrex-swedish). The weights found in this repository are from the pure acoustic model after unsupervised pretraining. This model is suitable for anyone interested in i) continued wav2vec2 pretraining with your own unsupervised data, or ii) using it as a feature extractor for your own downstream tasks (e.g. if you want to train your own CTC head, or an audio classifier).
**Disclaimer:** This is a work in progress.<br>
**Update 2022-01-08:** Updated to VoxRex-C version, use git to get the older (B) version.<br>
**Update 2022-05-16:** The paper is available [here](https://arxiv.org/abs/2205.03026).
This model has been pretrained for 400,000 updates on the P4-10k corpus, which contains 10 000 hours of Swedish local public service radio as well as 1500 hours of audio books and other speech from KB's collections.

| {"language": "sv", "license": "cc0-1.0", "tags": ["audio", "automatic-speech-recognition", "voxrex"]} | KBLab/wav2vec2-large-voxrex | null | [
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxrex",
"sv",
"arxiv:2205.03026",
"license:cc0-1.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2205.03026"
] | [
"sv"
] | TAGS
#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxrex #sv #arxiv-2205.03026 #license-cc0-1.0 #endpoints_compatible #region-us
|
# Wav2vec 2.0 large VoxRex (C)
Please note: The model hosted in this repository is a pretrained wav2vec2 without a CTC head, as such it cannot do speech-to-text. If you are interested in speech-to-text, see our finetuned version of this model, which can be found at KBLab/wav2vec2-large-voxrex-swedish. The weights found in this repository are from the pure acoustic model after unsupervised pretraining. This model is suitable for anyone interested in i) continued wav2vec2-pretraining with your own unsupervised data, ii) a feature extractor for finetuning your own downstream tasks (e.g. if you want to train your own CTC head, or an audio classifier).
Disclaimer: This is a work in progress.<br>
Update 2022-01-08: Updated to VoxRex-C version, use git to get the older (B) version.<br>
Update 2022-05-16: Paper is is here.
This model has been pretrained for 400,000 updates on the P4-10k corpus which contains 10 000 hours of swedish local public service radio as well as 1500 hours of audio books and other speech from KBs collections.
!Accuracy during training
| [
"# Wav2vec 2.0 large VoxRex (C)\n\nPlease note: The model hosted in this repository is a pretrained wav2vec2 without a CTC head, as such it cannot do speech-to-text. If you are interested in speech-to-text, see our finetuned version of this model, which can be found at KBLab/wav2vec2-large-voxrex-swedish. The weights found in this repository are from the pure acoustic model after unsupervised pretraining. This model is suitable for anyone interested in i) continued wav2vec2-pretraining with your own unsupervised data, ii) a feature extractor for finetuning your own downstream tasks (e.g. if you want to train your own CTC head, or an audio classifier). \n\nDisclaimer: This is a work in progress.<br>\nUpdate 2022-01-08: Updated to VoxRex-C version, use git to get the older (B) version.<br>\nUpdate 2022-05-16: Paper is is here.\n\nThis model has been pretrained for 400,000 updates on the P4-10k corpus which contains 10 000 hours of swedish local public service radio as well as 1500 hours of audio books and other speech from KBs collections.\n\n!Accuracy during training"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxrex #sv #arxiv-2205.03026 #license-cc0-1.0 #endpoints_compatible #region-us \n",
"# Wav2vec 2.0 large VoxRex (C)\n\nPlease note: The model hosted in this repository is a pretrained wav2vec2 without a CTC head, as such it cannot do speech-to-text. If you are interested in speech-to-text, see our finetuned version of this model, which can be found at KBLab/wav2vec2-large-voxrex-swedish. The weights found in this repository are from the pure acoustic model after unsupervised pretraining. This model is suitable for anyone interested in i) continued wav2vec2-pretraining with your own unsupervised data, ii) a feature extractor for finetuning your own downstream tasks (e.g. if you want to train your own CTC head, or an audio classifier). \n\nDisclaimer: This is a work in progress.<br>\nUpdate 2022-01-08: Updated to VoxRex-C version, use git to get the older (B) version.<br>\nUpdate 2022-05-16: Paper is is here.\n\nThis model has been pretrained for 400,000 updates on the P4-10k corpus which contains 10 000 hours of swedish local public service radio as well as 1500 hours of audio books and other speech from KBs collections.\n\n!Accuracy during training"
] | [
62,
297
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #audio #automatic-speech-recognition #voxrex #sv #arxiv-2205.03026 #license-cc0-1.0 #endpoints_compatible #region-us \n# Wav2vec 2.0 large VoxRex (C)\n\nPlease note: The model hosted in this repository is a pretrained wav2vec2 without a CTC head, as such it cannot do speech-to-text. If you are interested in speech-to-text, see our finetuned version of this model, which can be found at KBLab/wav2vec2-large-voxrex-swedish. The weights found in this repository are from the pure acoustic model after unsupervised pretraining. This model is suitable for anyone interested in i) continued wav2vec2-pretraining with your own unsupervised data, ii) a feature extractor for finetuning your own downstream tasks (e.g. if you want to train your own CTC head, or an audio classifier). \n\nDisclaimer: This is a work in progress.<br>\nUpdate 2022-01-08: Updated to VoxRex-C version, use git to get the older (B) version.<br>\nUpdate 2022-05-16: Paper is is here.\n\nThis model has been pretrained for 400,000 updates on the P4-10k corpus which contains 10 000 hours of swedish local public service radio as well as 1500 hours of audio books and other speech from KBs collections.\n\n!Accuracy during training"
] |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Swedish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Swedish using the [NST Swedish Dictation](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-17/) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
**Note:** We recommend using our newer model [wav2vec2-large-voxrex-swedish](https://huggingface.co/KBLab/wav2vec2-large-voxrex-swedish) for the best performance.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-xlsr-53-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-xlsr-53-swedish")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Swedish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "sv-SE", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("KBLab/wav2vec2-large-xlsr-53-swedish")
model = Wav2Vec2ForCTC.from_pretrained("KBLab/wav2vec2-large-xlsr-53-swedish")
model.to("cuda")

chars_to_ignore_regex = '[,?.!\\-;:"“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run inference batch-wise and decode the predicted token ids to text
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
# CER is computed by treating each character as a "word" for the WER metric
print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]])))
```
**WER**: 14.298610%
**CER**: 4.925294%
## Training
First, the XLSR model was further pre-trained for 50 epochs on a corpus consisting of 1000 hours of spoken Swedish from various radio stations. Second, [NST Swedish Dictation](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-17/) was used for fine-tuning together with [Common Voice](https://commonvoice.mozilla.org/en/datasets). Lastly, only the Common Voice dataset was used for the final fine-tuning. The [Fairseq](https://github.com/fairseq) scripts were used.
| {"language": "sv", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", "KTH/nst"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 Swedish by KBLab", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice sv-SE", "type": "common_voice", "args": "sv-SE"}, "metrics": [{"type": "wer", "value": 14.29861, "name": "Test WER"}, {"type": "cer", "value": 4.925294, "name": "Test CER"}]}]}]} | KBLab/wav2vec2-large-xlsr-53-swedish | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"sv",
"dataset:common_voice",
"dataset:KTH/nst",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sv"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #sv #dataset-common_voice #dataset-KTH/nst #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Swedish
Fine-tuned facebook/wav2vec2-large-xlsr-53 in Swedish using the NST Swedish Dictation.
When using this model, make sure that your speech input is sampled at 16kHz.
Note: We recommend using our newer model wav2vec2-large-voxrex-swedish for the best performance.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Swedish test data of Common Voice.
WER: 14.298610%
CER: 4.925294%
## Training
First the XLSR model was further pre-trained for 50 epochs with a corpus consisting of 1000 hours spoken Swedish from various radio stations. Secondly NST Swedish Dictation was used for fine tuning as well as Common Voice. Lastly only Common Voice dataset was used for final finetuning. The Fairseq scripts were used.
| [
"# Wav2Vec2-Large-XLSR-53-Swedish\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Swedish using the NST Swedish Dictation.\nWhen using this model, make sure that your speech input is sampled at 16kHz.\n\nNote: We recommend using our newer model wav2vec2-large-voxrex-swedish for the best performance.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Swedish test data of Common Voice.\n\n\n\n\nWER: 14.298610%\nCER: 4.925294%",
"## Training\n\nFirst the XLSR model was further pre-trained for 50 epochs with a corpus consisting of 1000 hours spoken Swedish from various radio stations. Secondly NST Swedish Dictation was used for fine tuning as well as Common Voice. Lastly only Common Voice dataset was used for final finetuning. The Fairseq scripts were used."
] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #sv #dataset-common_voice #dataset-KTH/nst #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Swedish\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Swedish using the NST Swedish Dictation.\nWhen using this model, make sure that your speech input is sampled at 16kHz.\n\nNote: We recommend using our newer model wav2vec2-large-voxrex-swedish for the best performance.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Swedish test data of Common Voice.\n\n\n\n\nWER: 14.298610%\nCER: 4.925294%",
"## Training\n\nFirst the XLSR model was further pre-trained for 50 epochs with a corpus consisting of 1000 hours spoken Swedish from various radio stations. Secondly NST Swedish Dictation was used for fine tuning as well as Common Voice. Lastly only Common Voice dataset was used for final finetuning. The Fairseq scripts were used."
] | [
75,
90,
18,
38,
71
] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #sv #dataset-common_voice #dataset-KTH/nst #license-apache-2.0 #model-index #endpoints_compatible #region-us \n# Wav2Vec2-Large-XLSR-53-Swedish\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Swedish using the NST Swedish Dictation.\nWhen using this model, make sure that your speech input is sampled at 16kHz.\n\nNote: We recommend using our newer model wav2vec2-large-voxrex-swedish for the best performance.## Usage\n\nThe model can be used directly (without a language model) as follows:## Evaluation\n\nThe model can be evaluated as follows on the Swedish test data of Common Voice.\n\n\n\n\nWER: 14.298610%\nCER: 4.925294%## Training\n\nFirst the XLSR model was further pre-trained for 50 epochs with a corpus consisting of 1000 hours spoken Swedish from various radio stations. Secondly NST Swedish Dictation was used for fine tuning as well as Common Voice. Lastly only Common Voice dataset was used for final finetuning. The Fairseq scripts were used."
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | KENNETHFOO/DialoGPT-medium-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] | [
39,
7
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT Model"
] |
text2text-generation | transformers |
# Model
This model utilises the T5-base pre-trained model. It was fine-tuned using a modified version of the [JFLEG](https://arxiv.org/abs/1702.04066) dataset and the [Happy Transformer framework](https://github.com/EricFillion/happy-transformer). The model was fine-tuned for sentence correction on normal English translations and positional English translations of local Caribbean English Creole, and it will be updated periodically as more data is compiled. For more on Caribbean English Creole, check out the [Caribe](https://pypi.org/project/Caribe/) library.
___
# Re-training/Fine Tuning
Fine-tuning yielded a final accuracy of 92%.
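
For readers who want to reproduce or extend this step, a minimal fine-tuning sketch with Happy Transformer is shown below. The CSV path, the "input"/"target" column layout, and the training arguments are illustrative assumptions rather than the exact recipe used for this model:

```python
from happytransformer import HappyTextToText, TTTrainArgs

# Start from the pre-trained T5 checkpoint (or from "KES/T5-KES" to continue training).
happy_tt = HappyTextToText("T5", "t5-base")

# Assumed CSV with two columns, "input" and "target", where inputs carry the "grammar: " prefix.
train_args = TTTrainArgs(batch_size=8, num_train_epochs=1)
happy_tt.train("train.csv", args=train_args)
```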
# Usage
```python
from happytransformer import HappyTextToText, TTSettings
pre_trained_model="T5"
model = HappyTextToText(pre_trained_model, "KES/T5-KES")
arguments = TTSettings(num_beams=4, min_length=1)
sentence = "Wat iz your nam"
correction = model.generate_text("grammar: "+sentence, args=arguments)
if " ." in correction.text:  # remove any space the model inserts before a period
    correction.text = correction.text.replace(" .", ".")
print(correction.text) # Correction: "What is your name?".
```
___
# Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("KES/T5-KES")
model = AutoModelForSeq2SeqLM.from_pretrained("KES/T5-KES")
text = "I am lived with my parenmts "
inputs = tokenizer("grammar:"+text, truncation=True, return_tensors='pt')
output = model.generate(inputs['input_ids'], num_beams=4, max_length=512, early_stopping=True)
correction=tokenizer.batch_decode(output, skip_special_tokens=True)
print("".join(correction)) #Correction: I am living with my parents.
```
___
| {"language": "en", "license": "cc-by-nc-sa-4.0", "tags": ["sentence correction", "text2text-generation"], "datasets": ["jfleg"]} | KES/T5-KES | null | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"sentence correction",
"en",
"dataset:jfleg",
"arxiv:1702.04066",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1702.04066"
] | [
"en"
] | TAGS
#transformers #pytorch #safetensors #t5 #text2text-generation #sentence correction #en #dataset-jfleg #arxiv-1702.04066 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model
This model utilises T5-base pre-trained model. It was fine tuned using a modified version of the JFLEG dataset and Happy Transformer framework. This model was fine-tuned for sentence correction on normal English translations and positional English translations of local Caribbean English Creole. This model will be updated periodically as more data is compiled. For more on the Caribbean English Creole checkout the library Caribe.
___
# Re-training/Fine Tuning
The results of fine-tuning resulted in a final accuracy of 92%
# Usage
___
# Usage with Transformers
___
| [
"# Model\nThis model utilises T5-base pre-trained model. It was fine tuned using a modified version of the JFLEG dataset and Happy Transformer framework. This model was fine-tuned for sentence correction on normal English translations and positional English translations of local Caribbean English Creole. This model will be updated periodically as more data is compiled. For more on the Caribbean English Creole checkout the library Caribe.\n\n___",
"# Re-training/Fine Tuning\n\nThe results of fine-tuning resulted in a final accuracy of 92%",
"# Usage \n\n\n\n\n___",
"# Usage with Transformers\n\n\n___"
] | [
"TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #sentence correction #en #dataset-jfleg #arxiv-1702.04066 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model\nThis model utilises T5-base pre-trained model. It was fine tuned using a modified version of the JFLEG dataset and Happy Transformer framework. This model was fine-tuned for sentence correction on normal English translations and positional English translations of local Caribbean English Creole. This model will be updated periodically as more data is compiled. For more on the Caribbean English Creole checkout the library Caribe.\n\n___",
"# Re-training/Fine Tuning\n\nThe results of fine-tuning resulted in a final accuracy of 92%",
"# Usage \n\n\n\n\n___",
"# Usage with Transformers\n\n\n___"
] | [
77,
90,
21,
5,
7
] | [
"TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #sentence correction #en #dataset-jfleg #arxiv-1702.04066 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model\nThis model utilises T5-base pre-trained model. It was fine tuned using a modified version of the JFLEG dataset and Happy Transformer framework. This model was fine-tuned for sentence correction on normal English translations and positional English translations of local Caribbean English Creole. This model will be updated periodically as more data is compiled. For more on the Caribbean English Creole checkout the library Caribe.\n\n___# Re-training/Fine Tuning\n\nThe results of fine-tuning resulted in a final accuracy of 92%# Usage \n\n\n\n\n___# Usage with Transformers\n\n\n___"
] |
text2text-generation | transformers | # Trinidad English Creole Parser
This model was trained as a parser for Trinidad English Creole.
---
# Model
This model utilises the T5-base pre-trained model. It was fine-tuned using a combination of a custom dataset and a creolised [JFLEG](https://arxiv.org/abs/1702.04066) dataset. The JFLEG dataset was creolised using the file encoding feature of the Caribe library. For more on Caribbean Creole, check out the [Caribe](https://pypi.org/project/Caribe/) library.
___
# Usage with Transformers
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("KES/T5-TTParser")
model = AutoModelForSeq2SeqLM.from_pretrained("KES/T5-TTParser")
txt = "Ah have live with mi paremnts en London"
inputs = tokenizer("grammar:"+txt, truncation=True, return_tensors='pt')
output = model.generate(inputs['input_ids'], num_beams=4, max_length=512, early_stopping=True)
correction=tokenizer.batch_decode(output, skip_special_tokens=True)
print("".join(correction)) #Correction: Ah live with meh parents in London.
``` | {"language": "en", "license": "cc-by-nc-sa-4.0", "tags": ["Trinidad and Tobago English Parser", "text2text-generation", "Caribe"], "datasets": ["Custom dataset", "Creolised JFLEG"]} | KES/T5-TTParser | null | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"Trinidad and Tobago English Parser",
"Caribe",
"en",
"arxiv:1702.04066",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1702.04066"
] | [
"en"
] | TAGS
#transformers #pytorch #t5 #text2text-generation #Trinidad and Tobago English Parser #Caribe #en #arxiv-1702.04066 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # Trinidad English Creole Parser
This model was trained as a parser to Trinidad English Creole.
---
# Model
This model utilises T5-base pre-trained model. It was fine tuned using a combination of a custom dataset and creolised JFLEG dataset. JFLEG dataset was creolised using the file encoding feature of the Caribe library. For more on Caribbean Creole checkout the library Caribe.
___
# Usage with Transformers
| [
"# Trinidad English Creole Parser\nThis model was trained as a parser to Trinidad English Creole.\n\n---",
"# Model\nThis model utilises T5-base pre-trained model. It was fine tuned using a combination of a custom dataset and creolised JFLEG dataset. JFLEG dataset was creolised using the file encoding feature of the Caribe library. For more on Caribbean Creole checkout the library Caribe.\n\n___",
"# Usage with Transformers"
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #Trinidad and Tobago English Parser #Caribe #en #arxiv-1702.04066 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Trinidad English Creole Parser\nThis model was trained as a parser to Trinidad English Creole.\n\n---",
"# Model\nThis model utilises T5-base pre-trained model. It was fine tuned using a combination of a custom dataset and creolised JFLEG dataset. JFLEG dataset was creolised using the file encoding feature of the Caribe library. For more on Caribbean Creole checkout the library Caribe.\n\n___",
"# Usage with Transformers"
] | [
73,
22,
75,
4
] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #Trinidad and Tobago English Parser #Caribe #en #arxiv-1702.04066 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Trinidad English Creole Parser\nThis model was trained as a parser to Trinidad English Creole.\n\n---# Model\nThis model utilises T5-base pre-trained model. It was fine tuned using a combination of a custom dataset and creolised JFLEG dataset. JFLEG dataset was creolised using the file encoding feature of the Caribe library. For more on Caribbean Creole checkout the library Caribe.\n\n___# Usage with Transformers"
] |
text2text-generation | transformers |
# Model Card for ke-t5-base-ko
# Model Details
## Model Description
- **Developed by:** Korea Electronics Technology Institute Artificial Intelligence Research Center
- **Shared by [Optional]:** More information needed
- **Model type:** Text2Text Generation
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Related Models:**
- **Parent Model:** T5
- **Resources for more information:**
- [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- [KE-T5 Github Repo](https://github.com/AIRC-KETI/ke-t5)
- [Paper](https://aclanthology.org/2021.findings-emnlp.33/)
- [Associated Paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
- [Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
# Uses
## Direct Use
This model can be used for the task of Text2Text Generation
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
See the [t5-base model card](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) for further information.
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
```bibtex
@inproceedings{kim-etal-2021-model-cross,
title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems",
author = "Kim, San and
Jang, Jin Yea and
Jung, Minyoung and
Shin, Saim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.33",
doi = "10.18653/v1/2021.findings-emnlp.33",
pages = "352--365",
abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.",
}
```
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
```
**APA:**
```
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Korea Electronics Technology Institute Artificial Intelligence Research Center in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-base-ko")
model = AutoModelForSeq2SeqLM.from_pretrained("KETI-AIR/ke-t5-base-ko")
```
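
Continuing from the loading snippet above, a minimal generation sketch is shown below. The input sentence mirrors the widget example and the decoding settings are illustrative assumptions; as a pre-training checkpoint, the raw output mainly serves as a sanity check before fine-tuning on a downstream task.

```python
# Illustrative only: prompt and decoding parameters are assumptions, not from the model card.
inputs = tokenizer("아버지가 방에 들어가신다.", return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```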
</details>
| {"language": "ko", "license": "apache-2.0", "tags": ["t5"], "eos_token": "</s>", "widget": [{"text": "\uc544\ubc84\uc9c0\uac00 \ubc29\uc5d0 \ub4e4\uc5b4\uac00\uc2e0\ub2e4.</s>"}]} | KETI-AIR/ke-t5-base-ko | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"ko",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1910.09700"
] | [
"ko"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #ko #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Model Card for ke-t5-base-ko
# Model Details
## Model Description
- Developed by: Korea Electronics Technology Institute Artificial Intelligence Research Center
- Shared by [Optional]: More information needed
- Model type: Text2Text Generation
- Language(s) (NLP): More information needed
- License: More information needed
- Related Models:
- Parent Model: T5
- Resources for more information:
- GitHub Repo
- KE-T5 Github Repo
- Paper
- Associated Paper
- Blog Post
# Uses
## Direct Use
This model can be used for the task of Text2Text Generation
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.
The model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).
See the t5-base model card for further information.
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: More information needed
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
BibTeX:
APA:
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Korea Electronics Technology Institute Artificial Intelligence Research Center in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
</details>
| [
"# Model Card for ke-t5-base-ko",
"# Model Details",
"## Model Description\n \n \n- Developed by: Korea Electronics Technology Institute Artificial Intelligence Research Center\n- Shared by [Optional]: More information needed\n- Model type: Text2Text Generation\n- Language(s) (NLP): More information needed\n- License: More information needed\n- Related Models:\n - Parent Model: T5\n- Resources for more information: \n - GitHub Repo\n - KE-T5 Github Repo\n - Paper\n - Associated Paper\n - Blog Post",
"# Uses",
"## Direct Use\n \nThis model can be used for the task of Text2Text Generation",
"## Downstream Use [Optional]\n \nMore information needed",
"## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.",
"## Recommendations\n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"# Training Details",
"## Training Data\n \nThe model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.\n \nThe model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).\n \n See the t5-base model card for further information.",
"## Training Procedure",
"### Preprocessing\n \nMore information needed",
"### Speeds, Sizes, Times\n \nMore information needed",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\n \nMore information needed",
"### Factors",
"### Metrics\n \nMore information needed",
"## Results \n \nMore information needed",
"# Model Examination\n \nMore information needed",
"# Environmental Impact\n \n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\n \nMore information needed",
"## Compute Infrastructure\n \nMore information needed",
"### Hardware\n \nMore information needed",
"### Software\nMore information needed\n \nBibTeX:\n \n \n\n\n \n \nAPA:",
"# Glossary [optional]\nMore information needed",
"# More Information [optional]\n \nMore information needed",
"# Model Card Authors [optional]\n \n \nKorea Electronics Technology Institute Artificial Intelligence Research Center in collaboration with Ezi Ozoani and the Hugging Face team",
"# Model Card Contact\n \nMore information needed",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n</details>"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #ko #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Model Card for ke-t5-base-ko",
"# Model Details",
"## Model Description\n \n \n- Developed by: Korea Electronics Technology Institute Artificial Intelligence Research Center\n- Shared by [Optional]: More information needed\n- Model type: Text2Text Generation\n- Language(s) (NLP): More information needed\n- License: More information needed\n- Related Models:\n - Parent Model: T5\n- Resources for more information: \n - GitHub Repo\n - KE-T5 Github Repo\n - Paper\n - Associated Paper\n - Blog Post",
"# Uses",
"## Direct Use\n \nThis model can be used for the task of Text2Text Generation",
"## Downstream Use [Optional]\n \nMore information needed",
"## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.",
"## Recommendations\n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"# Training Details",
"## Training Data\n \nThe model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.\n \nThe model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).\n \n See the t5-base model card for further information.",
"## Training Procedure",
"### Preprocessing\n \nMore information needed",
"### Speeds, Sizes, Times\n \nMore information needed",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\n \nMore information needed",
"### Factors",
"### Metrics\n \nMore information needed",
"## Results \n \nMore information needed",
"# Model Examination\n \nMore information needed",
"# Environmental Impact\n \n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\n \nMore information needed",
"## Compute Infrastructure\n \nMore information needed",
"### Hardware\n \nMore information needed",
"### Software\nMore information needed\n \nBibTeX:\n \n \n\n\n \n \nAPA:",
"# Glossary [optional]\nMore information needed",
"# More Information [optional]\n \nMore information needed",
"# Model Card Authors [optional]\n \n \nKorea Electronics Technology Institute Artificial Intelligence Research Center in collaboration with Ezi Ozoani and the Hugging Face team",
"# Model Card Contact\n \nMore information needed",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n</details>"
] | [
70,
12,
3,
93,
2,
17,
10,
25,
70,
33,
3,
82,
4,
10,
11,
2,
9,
8,
4,
8,
6,
6,
63,
6,
9,
7,
7,
14,
9,
9,
28,
7,
36
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #ko #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n# Model Card for ke-t5-base-ko# Model Details## Model Description\n \n \n- Developed by: Korea Electronics Technology Institute Artificial Intelligence Research Center\n- Shared by [Optional]: More information needed\n- Model type: Text2Text Generation\n- Language(s) (NLP): More information needed\n- License: More information needed\n- Related Models:\n - Parent Model: T5\n- Resources for more information: \n - GitHub Repo\n - KE-T5 Github Repo\n - Paper\n - Associated Paper\n - Blog Post# Uses## Direct Use\n \nThis model can be used for the task of Text2Text Generation## Downstream Use [Optional]\n \nMore information needed## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.# Bias, Risks, and Limitations\n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.## Recommendations\n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.# Training Details## Training Data\n \nThe model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.\n \nThe model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).\n \n See the t5-base model card for further information.## Training Procedure### Preprocessing\n \nMore information needed### Speeds, Sizes, Times\n \nMore information needed# Evaluation## Testing Data, Factors & Metrics### Testing Data\n \nMore information needed### Factors### Metrics\n \nMore information needed## Results \n \nMore information needed# Model Examination\n \nMore information needed# Environmental Impact\n \n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed# Technical Specifications [optional]## Model Architecture and Objective\n \nMore information needed## Compute Infrastructure\n \nMore information needed### Hardware\n \nMore information needed### Software\nMore information needed\n \nBibTeX:\n \n \n\n\n \n \nAPA:# Glossary [optional]\nMore information needed# More Information [optional]\n \nMore information needed# Model Card Authors [optional]\n \n \nKorea Electronics Technology Institute Artificial Intelligence Research Center in collaboration with Ezi Ozoani and the Hugging Face team# Model Card Contact\n \nMore information needed# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n</details>"
] |
text2text-generation | transformers | # ke-t5 base
Pretrained T5 model on Korean and English. See the [Github](https://github.com/AIRC-KETI/ke-t5) repository, the [Paper](https://aclanthology.org/2021.findings-emnlp.33/), and the [Korean paper](https://koreascience.kr/article/CFKO202130060717834.pdf) for more details.
## How to use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("KETI-AIR/ke-t5-base-newslike")
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-base-newslike")
```
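
Continuing from the snippet above, here is a small sketch of encoding a sentence and taking the encoder's hidden states (the example sentence and the encoder-only call are illustrative additions, not part of the original card):

```python
import torch

# Tokenize an illustrative Korean sentence.
inputs = tokenizer("아버지가 방에 들어가신다.", return_tensors="pt")

# AutoModel loads the bare T5 encoder-decoder, so the encoder can be queried directly for representations.
with torch.no_grad():
    encoder_outputs = model.encoder(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask)

print(encoder_outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```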
## BibTeX entry and citation info
```bibtex
@inproceedings{kim-etal-2021-model-cross,
title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems",
author = "Kim, San and
Jang, Jin Yea and
Jung, Minyoung and
Shin, Saim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.33",
doi = "10.18653/v1/2021.findings-emnlp.33",
pages = "352--365",
abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.",
}
``` | {"language": ["ko", "en"], "license": "apache-2.0", "tags": ["t5"], "eos_token": "</s>", "widget": [{"text": "\uc544\ubc84\uc9c0\uac00 \ubc29\uc5d0 \ub4e4\uc5b4\uac00\uc2e0\ub2e4.</s>"}]} | KETI-AIR/ke-t5-base-newslike | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ko",
"en"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #ko #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # ke-t5 base
Pretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.
## How to use
## BibTeX entry and citation info
| [
"# ke-t5 base\nPretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.",
"## How to use",
"## BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #ko #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ke-t5 base\nPretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.",
"## How to use",
"## BibTeX entry and citation info"
] | [
58,
29,
5,
9
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #ko #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# ke-t5 base\nPretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.## How to use## BibTeX entry and citation info"
] |
text2text-generation | transformers |
# Model Card for ke-t5-base
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-Base is the checkpoint with 220 million parameters.
- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
- **Shared by [Optional]:** Korea Electronics Technology Institute Artificial Intelligence Research Center
- **Model type:** Text Generation
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Related Models:**
- **Parent Model:** T5
- **Resources for more information:**
- [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- [KE-T5 Github Repo](https://github.com/AIRC-KETI/ke-t5)
- [Paper](https://aclanthology.org/2021.findings-emnlp.33/)
- [Associated Paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
- [Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
# Uses
## Direct Use
The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
See the [t5-base model card](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) for further information.
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
The developers evaluated the model on 24 tasks, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.
### Factors
More information needed
### Metrics
More information needed
## Results
For full results for T5-Base, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
```bibtex
@inproceedings{kim-etal-2021-model-cross,
title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems",
author = "Kim, San and
Jang, Jin Yea and
Jung, Minyoung and
Shin, Saim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.33",
doi = "10.18653/v1/2021.findings-emnlp.33",
pages = "352--365",
abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.",
}
```
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
```
**APA:**
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Korea Electronics Technology Institute Artificial Intelligence Research Center in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("KETI-AIR/ke-t5-base")
```
See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more examples.
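
Continuing from the loading snippet above, a quick generation sketch (the task prefix, input text, and decoding settings are illustrative assumptions; this pre-training checkpoint generally needs task-specific fine-tuning before such prompts work well):

```python
# Illustrative only: prefix, input text, and decoding parameters are assumptions, not documented capabilities.
inputs = tokenizer("summarize: KE-T5 is a Korean-English T5 model pre-trained on a large bilingual corpus.", return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```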
</details>
| {"language": ["en", "ko"], "license": "apache-2.0", "tags": ["t5"], "eos_token": "</s>", "widget": [{"text": "\uc544\ubc84\uc9c0\uac00 \ubc29\uc5d0 \ub4e4\uc5b4\uac00\uc2e0\ub2e4.</s>"}]} | KETI-AIR/ke-t5-base | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"ko",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1910.09700"
] | [
"en",
"ko"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #en #ko #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for ke-t5-base
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) write:
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-Base is the checkpoint with 220 million parameters.
- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
- Shared by [Optional]: Korea Electronics Technology Institute Artificial Intelligence Research Center
- Model type: Text Generation
- Language(s) (NLP):More information needed
- License: More information needed
- Related Models:
- Parent Model: T5
- Resources for more information:
- GitHub Repo
- KE-T5 Github Repo
- Paper
- Associated Paper
- Blog Post
# Uses
## Direct Use
The developers write in a blog post that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.
The model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).
See the t5-base model card for further information.
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
The developers evaluated the model on 24 tasks, see the research paper for full details.
### Factors
More information needed
### Metrics
More information needed
## Results
For full results for T5-Base, see the research paper, Table 14.
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: Google Cloud TPU Pods
- Hours used: More information needed
- Cloud Provider: GCP
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
BibTeX:
APA:
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Korea Electronics Technology Institute Artificial Intelligence Research Center in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
See the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples.
</details>
| [
"# Model Card for ke-t5-base",
"# Model Details",
"## Model Description\n \nThe developers of the Text-To-Text Transfer Transformer (T5) write: \n \n> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.\n \nT5-Base is the checkpoint with 220 million parameters. \n \n \n- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. \n- Shared by [Optional]: Korea Electronics Technology Institute Artificial Intelligence Research Center\n- Model type: Text Generation\n- Language(s) (NLP):More information needed\n- License: More information needed\n- Related Models: \n - Parent Model: T5\n- Resources for more information: \n - GitHub Repo\n - KE-T5 Github Repo\n - Paper\n - Associated Paper\n - Blog Post",
"# Uses",
"## Direct Use\n \nThe developers write in a blog post that the model: \n \n> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself",
"## Downstream Use [Optional]\n \nMore information needed",
"## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.",
"## Recommendations\n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"# Training Details",
"## Training Data\n \nThe model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.\n \nThe model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).\n \n See the t5-base model card for further information.",
"## Training Procedure",
"### Preprocessing\n \nMore information needed",
"### Speeds, Sizes, Times\n \nMore information needed",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\n \nThe developers evaluated the model on 24 tasks, see the research paper for full details.",
"### Factors\nMore information needed",
"### Metrics\n \nMore information needed",
"## Results \n \nFor full results for T5-Base, see the research paper, Table 14.",
"# Model Examination\n \nMore information needed",
"# Environmental Impact\n \n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: Google Cloud TPU Pods\n- Hours used: More information needed\n- Cloud Provider: GCP\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\n \nMore information needed",
"## Compute Infrastructure\n \nMore information needed",
"### Hardware\n \nMore information needed",
"### Software\nMore information needed\n \nBibTeX:\n\n\n\n\n \nAPA:\n- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.",
"# Glossary [optional]\nMore information needed",
"# More Information [optional]\n \nMore information needed",
"# Model Card Authors [optional]\n \n \nKorea Electronics Technology Institute Artificial Intelligence Research Center in collaboration with Ezi Ozoani and the Hugging Face team",
"# Model Card Contact\n \nMore information needed",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n \nSee the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples.\n</details>"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #en #ko #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for ke-t5-base",
"# Model Details",
"## Model Description\n \nThe developers of the Text-To-Text Transfer Transformer (T5) write: \n \n> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.\n \nT5-Base is the checkpoint with 220 million parameters. \n \n \n- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. \n- Shared by [Optional]: Korea Electronics Technology Institute Artificial Intelligence Research Center\n- Model type: Text Generation\n- Language(s) (NLP):More information needed\n- License: More information needed\n- Related Models: \n - Parent Model: T5\n- Resources for more information: \n - GitHub Repo\n - KE-T5 Github Repo\n - Paper\n - Associated Paper\n - Blog Post",
"# Uses",
"## Direct Use\n \nThe developers write in a blog post that the model: \n \n> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself",
"## Downstream Use [Optional]\n \nMore information needed",
"## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.",
"## Recommendations\n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"# Training Details",
"## Training Data\n \nThe model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.\n \nThe model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).\n \n See the t5-base model card for further information.",
"## Training Procedure",
"### Preprocessing\n \nMore information needed",
"### Speeds, Sizes, Times\n \nMore information needed",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\n \nThe developers evaluated the model on 24 tasks, see the research paper for full details.",
"### Factors\nMore information needed",
"### Metrics\n \nMore information needed",
"## Results \n \nFor full results for T5-Base, see the research paper, Table 14.",
"# Model Examination\n \nMore information needed",
"# Environmental Impact\n \n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: Google Cloud TPU Pods\n- Hours used: More information needed\n- Cloud Provider: GCP\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\n \nMore information needed",
"## Compute Infrastructure\n \nMore information needed",
"### Hardware\n \nMore information needed",
"### Software\nMore information needed\n \nBibTeX:\n\n\n\n\n \nAPA:\n- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.",
"# Glossary [optional]\nMore information needed",
"# More Information [optional]\n \nMore information needed",
"# Model Card Authors [optional]\n \n \nKorea Electronics Technology Institute Artificial Intelligence Research Center in collaboration with Ezi Ozoani and the Hugging Face team",
"# Model Card Contact\n \nMore information needed",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n \nSee the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples.\n</details>"
] | [
68,
10,
3,
241,
2,
95,
10,
25,
70,
33,
3,
82,
4,
10,
11,
2,
9,
22,
7,
8,
20,
6,
64,
6,
9,
7,
7,
100,
9,
9,
28,
7,
58
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #en #ko #arxiv-1910.09700 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for ke-t5-base# Model Details## Model Description\n \nThe developers of the Text-To-Text Transfer Transformer (T5) write: \n \n> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.\n \nT5-Base is the checkpoint with 220 million parameters. \n \n \n- Developed by: Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. \n- Shared by [Optional]: Korea Electronics Technology Institute Artificial Intelligence Research Center\n- Model type: Text Generation\n- Language(s) (NLP):More information needed\n- License: More information needed\n- Related Models: \n - Parent Model: T5\n- Resources for more information: \n - GitHub Repo\n - KE-T5 Github Repo\n - Paper\n - Associated Paper\n - Blog Post# Uses## Direct Use\n \nThe developers write in a blog post that the model: \n \n> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself## Downstream Use [Optional]\n \nMore information needed## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.# Bias, Risks, and Limitations\n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.## Recommendations\n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.# Training Details## Training Data\n \nThe model is pre-trained on the Colossal Clean Crawled Corpus (C4), which was developed and released in the context of the same research paper as T5.\n \nThe model was pre-trained on a on a multi-task mixture of unsupervised (1.) and supervised tasks (2.).\n \n See the t5-base model card for further information.## Training Procedure### Preprocessing\n \nMore information needed### Speeds, Sizes, Times\n \nMore information needed# Evaluation## Testing Data, Factors & Metrics### Testing Data\n \nThe developers evaluated the model on 24 tasks, see the research paper for full details.### Factors\nMore information needed### Metrics\n \nMore information needed## Results \n \nFor full results for T5-Base, see the research paper, Table 14.# Model Examination\n \nMore information needed# Environmental Impact\n \n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. 
(2019).\n \n- Hardware Type: Google Cloud TPU Pods\n- Hours used: More information needed\n- Cloud Provider: GCP\n- Compute Region: More information needed\n- Carbon Emitted: More information needed# Technical Specifications [optional]## Model Architecture and Objective\n \nMore information needed## Compute Infrastructure\n \nMore information needed### Hardware\n \nMore information needed### Software\nMore information needed\n \nBibTeX:\n\n\n\n\n \nAPA:\n- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.# Glossary [optional]\nMore information needed# More Information [optional]\n \nMore information needed# Model Card Authors [optional]\n \n \nKorea Electronics Technology Institute Artificial Intelligence Research Center in collaboration with Ezi Ozoani and the Hugging Face team# Model Card Contact\n \nMore information needed# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n \nSee the Hugging Face T5 docs and a Colab Notebook created by the model developers for more examples.\n</details>"
] |
text2text-generation | transformers |
# ke-t5 base
Pretrained T5 Model on Korean and English. See [Github](https://github.com/AIRC-KETI/ke-t5) and [Paper](https://aclanthology.org/2021.findings-emnlp.33/) [Korean paper](https://koreascience.kr/article/CFKO202130060717834.pdf) for more details.
## How to use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("KETI-AIR/ke-t5-large-ko")
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-large-ko")
```
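
The snippet above loads the bare encoder-decoder via `AutoModel`. If the goal is to actually generate text, a minimal sketch along the following lines may be more convenient. Note that using `AutoModelForSeq2SeqLM` (which adds the language-modeling head needed for `generate()`), the example sentence (taken from the widget text in the card metadata), and the generation settings are illustrative choices rather than part of the original card, and a purely pre-trained checkpoint will usually need task-specific fine-tuning before its outputs are useful:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Same checkpoint as above, but with the LM head required by generate().
model = AutoModelForSeq2SeqLM.from_pretrained("KETI-AIR/ke-t5-large-ko")
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-large-ko")

# Illustrative Korean input, borrowed from the widget example in the card metadata.
inputs = tokenizer("아버지가 방에 들어가신다.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern applies unchanged to the other ke-t5 checkpoints below; only the model identifier differs.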
## BibTeX entry and citation info
```bibtex
@inproceedings{kim-etal-2021-model-cross,
title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems",
author = "Kim, San and
Jang, Jin Yea and
Jung, Minyoung and
Shin, Saim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.33",
doi = "10.18653/v1/2021.findings-emnlp.33",
pages = "352--365",
abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.",
}
``` | {"language": "ko", "license": "apache-2.0", "tags": ["t5"], "eos_token": "</s>", "widget": [{"text": "\uc544\ubc84\uc9c0\uac00 \ubc29\uc5d0 \ub4e4\uc5b4\uac00\uc2e0\ub2e4.</s>"}]} | KETI-AIR/ke-t5-large-ko | null | [
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ko"
] | TAGS
#transformers #pytorch #tf #jax #t5 #text2text-generation #ko #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ke-t5 base
Pretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.
## How to use
## BibTeX entry and citation info
| [
"# ke-t5 base\n\nPretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.",
"## How to use",
"## BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #ko #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ke-t5 base\n\nPretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.",
"## How to use",
"## BibTeX entry and citation info"
] | [
52,
29,
5,
9
] | [
"TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #ko #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# ke-t5 base\n\nPretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.## How to use## BibTeX entry and citation info"
] |
text2text-generation | transformers | # ke-t5 base
Pretrained T5 Model on Korean and English. See [Github](https://github.com/AIRC-KETI/ke-t5) and [Paper](https://aclanthology.org/2021.findings-emnlp.33/) [Korean paper](https://koreascience.kr/article/CFKO202130060717834.pdf) for more details.
## How to use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("KETI-AIR/ke-t5-large-newslike")
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-large-newslike")
```
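
Because ke-t5 follows T5's span-corruption pre-training, the most direct way to probe a checkpoint that has not been fine-tuned is a fill-in-the-blank query. The sketch below assumes the tokenizer exposes the standard T5 sentinel tokens (`<extra_id_0>`, `<extra_id_1>`, ...); the masked sentence and decoding settings are illustrative only:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("KETI-AIR/ke-t5-large-newslike")
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-large-newslike")

# <extra_id_0> marks the span the model is asked to fill in (T5 sentinel token).
masked = "아버지가 <extra_id_0> 들어가신다."
inputs = tokenizer(masked, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)

# Keep special tokens so the predicted sentinel boundaries stay visible.
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```

The decoded output typically has the form `<extra_id_0> ... </s>`, i.e. the model's guess for the masked span rather than fluent free-form text.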
## BibTeX entry and citation info
```bibtex
@inproceedings{kim-etal-2021-model-cross,
title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems",
author = "Kim, San and
Jang, Jin Yea and
Jung, Minyoung and
Shin, Saim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.33",
doi = "10.18653/v1/2021.findings-emnlp.33",
pages = "352--365",
abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.",
}
``` | {"language": ["ko", "en"], "license": "apache-2.0", "tags": ["t5"], "eos_token": "</s>", "widget": [{"text": "\uc544\ubc84\uc9c0\uac00 \ubc29\uc5d0 \ub4e4\uc5b4\uac00\uc2e0\ub2e4.</s>"}]} | KETI-AIR/ke-t5-large-newslike | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ko",
"en"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #ko #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # ke-t5 base
Pretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.
## How to use
## BibTeX entry and citation info
| [
"# ke-t5 base\nPretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.",
"## How to use",
"## BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #ko #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ke-t5 base\nPretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.",
"## How to use",
"## BibTeX entry and citation info"
] | [
58,
29,
5,
9
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #ko #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# ke-t5 base\nPretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.## How to use## BibTeX entry and citation info"
] |
text2text-generation | transformers |
# ke-t5 base
Pretrained T5 Model on Korean and English. See [Github](https://github.com/AIRC-KETI/ke-t5) and [Paper](https://aclanthology.org/2021.findings-emnlp.33/) [Korean paper](https://koreascience.kr/article/CFKO202130060717834.pdf) for more details.
## How to use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("KETI-AIR/ke-t5-large")
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-large")
```
## BibTeX entry and citation info
```bibtex
@inproceedings{kim-etal-2021-model-cross,
title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems",
author = "Kim, San and
Jang, Jin Yea and
Jung, Minyoung and
Shin, Saim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.33",
doi = "10.18653/v1/2021.findings-emnlp.33",
pages = "352--365",
abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.",
}
``` | {"language": ["en", "ko"], "license": "apache-2.0", "tags": ["t5"], "eos_token": "</s>", "widget": [{"text": "\uc544\ubc84\uc9c0\uac00 \ubc29\uc5d0 \ub4e4\uc5b4\uac00\uc2e0\ub2e4.</s>"}]} | KETI-AIR/ke-t5-large | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"ko"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #en #ko #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ke-t5 base
Pretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.
## How to use
## BibTeX entry and citation info
| [
"# ke-t5 base\n\nPretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.",
"## How to use",
"## BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #en #ko #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ke-t5 base\n\nPretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.",
"## How to use",
"## BibTeX entry and citation info"
] | [
58,
29,
5,
9
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #en #ko #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# ke-t5 base\n\nPretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.## How to use## BibTeX entry and citation info"
] |
text2text-generation | transformers |
# ke-t5 base
Pretrained T5 Model on Korean and English. See [Github](https://github.com/AIRC-KETI/ke-t5) and [Paper](https://aclanthology.org/2021.findings-emnlp.33/) [Korean paper](https://koreascience.kr/article/CFKO202130060717834.pdf) for more details.
## How to use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("KETI-AIR/ke-t5-small-ko")
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-small-ko")
```
## BibTeX entry and citation info
```bibtex
@inproceedings{kim-etal-2021-model-cross,
title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems",
author = "Kim, San and
Jang, Jin Yea and
Jung, Minyoung and
Shin, Saim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.33",
doi = "10.18653/v1/2021.findings-emnlp.33",
pages = "352--365",
abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.",
}
``` | {"language": "ko", "license": "apache-2.0", "tags": ["t5"], "eos_token": "</s>", "widget": [{"text": "\uc544\ubc84\uc9c0\uac00 \ubc29\uc5d0 \ub4e4\uc5b4\uac00\uc2e0\ub2e4.</s>"}]} | KETI-AIR/ke-t5-small-ko | null | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ko"
] | TAGS
#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #ko #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ke-t5 base
Pretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.
## How to use
## BibTeX entry and citation info
| [
"# ke-t5 base\n\nPretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.",
"## How to use",
"## BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #ko #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ke-t5 base\n\nPretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.",
"## How to use",
"## BibTeX entry and citation info"
] | [
56,
29,
5,
9
] | [
"TAGS\n#transformers #pytorch #tf #jax #safetensors #t5 #text2text-generation #ko #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# ke-t5 base\n\nPretrained T5 Model on Korean and English. See Github and Paper Korean paper for more details.## How to use## BibTeX entry and citation info"
] |