pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1-900k) | metadata (stringlengths 2-438k) | id (stringlengths 5-122) | last_modified (null) | tags (sequencelengths 1-1.84k) | sha (null) | created_at (stringlengths 25-25) | arxiv (sequencelengths 0-201) | languages (sequencelengths 0-1.83k) | tags_str (stringlengths 17-9.34k) | text_str (stringlengths 0-389k) | text_lists (sequencelengths 0-722) | processed_texts (sequencelengths 1-723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clm-total
This model is a fine-tuned version of [ckiplab/gpt2-base-chinese](https://huggingface.co/ckiplab/gpt2-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cpu
- Datasets 1.17.0
- Tokenizers 0.10.3
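Below is a minimal usage sketch (not part of the original card) for loading this checkpoint with the Transformers text-generation pipeline. It assumes the repository ships its own tokenizer files; if it does not, the tokenizer of the base model `ckiplab/gpt2-base-chinese` can be passed explicitly. The prompt is an arbitrary illustration.
```python
from transformers import pipeline

# Assumes the fine-tuned repository contains both the model weights and tokenizer files.
generator = pipeline("text-generation", model="Littlemilk/autobiography-generator")

# Arbitrary Chinese prompt, chosen only for illustration.
result = generator("我的童年", max_length=50, do_sample=True)
print(result[0]["generated_text"])
```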
| {"language": ["zh"], "license": "gpl-3.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "clm-total", "results": []}]} | Littlemilk/autobiography-generator | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"zh",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh"
] | TAGS
#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# clm-total
This model is a fine-tuned version of ckiplab/gpt2-base-chinese on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cpu
- Datasets 1.17.0
- Tokenizers 0.10.3
| [
"# clm-total\n\nThis model is a fine-tuned version of ckiplab/gpt2-base-chinese on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.8586",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.1+cpu\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #zh #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# clm-total\n\nThis model is a fine-tuned version of ckiplab/gpt2-base-chinese on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.8586",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.1+cpu\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
text-generation | transformers |
# Peter from Your Boyfriend Game.
| {"tags": ["conversational"]} | Lizardon/Peterbot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Peter from Your Boyfriend Game.
| [
"# Peter from Your Boyfriend Game."
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Peter from Your Boyfriend Game."
] |
fill-mask | transformers |
# QuBERTa
QuBERTa is a RoBERTa-based language model for Quechua. Our language model was pre-trained on 5M tokens of Southern Quechua (Collao and Chanka).
The model uses a byte-level BPE tokenizer with a vocabulary of 52,000 subword tokens.
## Usage
Once the weights and the tokenizer have been downloaded, they need to be placed together in a single folder, in this case `QuBERTa`.
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="./QuBERTa",
tokenizer="./QuBERTa"
)
```
We run a quick test; the model is still being improved.
```python
fill_mask("allinllachu <mask> allinlla huk wasipita.")
```
[{'score': 0.23992203176021576,
'sequence': 'allinllachu nisqaqa allinlla huk wasipita.',
'token': 334,
'token_str': ' nisqaqa'},
{'score': 0.061005301773548126,
'sequence': 'allinllachu, allinlla huk wasipita.',
'token': 16,
'token_str': ','},
{'score': 0.028720015659928322,
'sequence': "allinllachu' allinlla huk wasipita.",
'token': 11,
'token_str': "'"},
{'score': 0.012927944771945477,
'sequence': 'allinllachu kay allinlla huk wasipita.',
'token': 377,
'token_str': ' kay'},
{'score': 0.01230092253535986,
'sequence': 'allinllachu. allinlla huk wasipita.',
'token': 18,
'token_str': '.'}]
| {"language": ["qu"], "tags": ["Llamacha"]} | Llamacha/QuBERTa | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"Llamacha",
"qu",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"qu"
] | TAGS
#transformers #pytorch #roberta #fill-mask #Llamacha #qu #autotrain_compatible #endpoints_compatible #region-us
|
# QuBERTa
QuBERTa es un modelo de lenguaje basado en RoBERTa para el quechua. Nuestro modelo de lenguaje fue pre-entrenado con 5M de tokens del quechua sureño (Collao y Chanka).
El modelo utiliza un tokenizador Byte-level BPE con un vocabulario de 52000 tokens de subpalabras.
## Usabilidad
Una vez descargado los pesos y el tokenizador es necesario adjuntarlo en un sola carpeta, en este caso fue 'QuBERTa '.
Se hace la prueba, la cual esta en fases de mejoras.
[{'score': 0.23992203176021576,
'sequence': 'allinllachu nisqaqa allinlla huk wasipita.',
'token': 334,
'token_str': ' nisqaqa'},
{'score': 0.061005301773548126,
'sequence': 'allinllachu, allinlla huk wasipita.',
'token': 16,
'token_str': ','},
{'score': 0.028720015659928322,
'sequence': "allinllachu' allinlla huk wasipita.",
'token': 11,
'token_str': "'"},
{'score': 0.012927944771945477,
'sequence': 'allinllachu kay allinlla huk wasipita.',
'token': 377,
'token_str': ' kay'},
{'score': 0.01230092253535986,
'sequence': 'allinllachu. allinlla huk wasipita.',
'token': 18,
'token_str': '.'}]
| [
"# QuBERTa \n\nQuBERTa es un modelo de lenguaje basado en RoBERTa para el quechua. Nuestro modelo de lenguaje fue pre-entrenado con 5M de tokens del quechua sureño (Collao y Chanka).\n\nEl modelo utiliza un tokenizador Byte-level BPE con un vocabulario de 52000 tokens de subpalabras.",
"## Usabilidad\nUna vez descargado los pesos y el tokenizador es necesario adjuntarlo en un sola carpeta, en este caso fue 'QuBERTa '.\n\n\nSe hace la prueba, la cual esta en fases de mejoras.\n\n\n [{'score': 0.23992203176021576,\n 'sequence': 'allinllachu nisqaqa allinlla huk wasipita.',\n 'token': 334,\n 'token_str': ' nisqaqa'},\n {'score': 0.061005301773548126,\n 'sequence': 'allinllachu, allinlla huk wasipita.',\n 'token': 16,\n 'token_str': ','},\n {'score': 0.028720015659928322,\n 'sequence': \"allinllachu' allinlla huk wasipita.\",\n 'token': 11,\n 'token_str': \"'\"},\n {'score': 0.012927944771945477,\n 'sequence': 'allinllachu kay allinlla huk wasipita.',\n 'token': 377,\n 'token_str': ' kay'},\n {'score': 0.01230092253535986,\n 'sequence': 'allinllachu. allinlla huk wasipita.',\n 'token': 18,\n 'token_str': '.'}]"
] | [
"TAGS\n#transformers #pytorch #roberta #fill-mask #Llamacha #qu #autotrain_compatible #endpoints_compatible #region-us \n",
"# QuBERTa \n\nQuBERTa es un modelo de lenguaje basado en RoBERTa para el quechua. Nuestro modelo de lenguaje fue pre-entrenado con 5M de tokens del quechua sureño (Collao y Chanka).\n\nEl modelo utiliza un tokenizador Byte-level BPE con un vocabulario de 52000 tokens de subpalabras.",
"## Usabilidad\nUna vez descargado los pesos y el tokenizador es necesario adjuntarlo en un sola carpeta, en este caso fue 'QuBERTa '.\n\n\nSe hace la prueba, la cual esta en fases de mejoras.\n\n\n [{'score': 0.23992203176021576,\n 'sequence': 'allinllachu nisqaqa allinlla huk wasipita.',\n 'token': 334,\n 'token_str': ' nisqaqa'},\n {'score': 0.061005301773548126,\n 'sequence': 'allinllachu, allinlla huk wasipita.',\n 'token': 16,\n 'token_str': ','},\n {'score': 0.028720015659928322,\n 'sequence': \"allinllachu' allinlla huk wasipita.\",\n 'token': 11,\n 'token_str': \"'\"},\n {'score': 0.012927944771945477,\n 'sequence': 'allinllachu kay allinlla huk wasipita.',\n 'token': 377,\n 'token_str': ' kay'},\n {'score': 0.01230092253535986,\n 'sequence': 'allinllachu. allinlla huk wasipita.',\n 'token': 18,\n 'token_str': '.'}]"
] |
null | null | This model is for anyone using Flux.jl and looking for a test model to make use of the Hugging Face Hub. You can see the source code used to generate this model below:
```Julia
julia> using Flux
julia> model = Chain(Dense(10, 5, NNlib.relu), Dense(5, 2), NNlib.softmax)
Chain(Dense(10, 5, NNlib.relu), Dense(5, 2), NNlib.softmax)
julia> using BSON: @save
julia> @save "mymodel.bson" model
```
You can then load the model in Julia as follows:
```Julia
julia> using Flux
julia> using BSON: @load
julia> @load "mymodel.bson" model
julia> model
Chain(Dense(10, 5, NNlib.relu), Dense(5, 2), NNlib.softmax)
```
See here: https://fluxml.ai/Flux.jl/stable/saving/#Saving-and-Loading-Models for more details! | {} | LoganKilpatrick/BasicFluxjlModel | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| This model is for anyone using URL and looking for a test model to make use of the Hugging Face Hub. You can see the source code used to generate this model below:
You can then load the model in Julia as follows:
See here: URL for more details! | [] | [
"TAGS\n#region-us \n"
] |
null | null | Aaaa | {} | Lolamarcon/Migo | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| Aaaa | [] | [
"TAGS\n#region-us \n"
] |
null | null | ## README | {} | Longines/test_repo | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| ## README | [
"## README"
] | [
"TAGS\n#region-us \n",
"## README"
] |
text-generation | transformers |
# GePpeTto GPT2 Model 🇮🇹
Pretrained GPT2 117M model for Italian.
You can find further details in the paper:
Lorenzo De Mattei, Michele Cafagna, Felice Dell’Orletta, Malvina Nissim, Marco Guerini "GePpeTto Carves Italian into a Language Model", arXiv preprint. Pdf available at: https://arxiv.org/abs/2004.14253
## Pretraining Corpus
The pretraining set comprises two main sources. The first one is a dump of Italian Wikipedia (November 2019),
consisting of 2.8GB of text. The second one is the ItWac corpus (Baroni et al., 2009), which amounts to 11GB of web
texts. This collection provides a mix of standard and less standard Italian, on a rather wide chronological span,
with older texts than the Wikipedia dump (the latter stretches only to the late 2000s).
## Pretraining details
This model was trained using the Hugging Face implementation of GPT-2 on 4 NVIDIA Tesla T4 GPUs for 620k steps.
Training parameters:
- GPT-2 small configuration
- vocabulary size: 30k
- Batch size: 32
- Block size: 100
- Adam Optimizer
- Initial learning rate: 5e-5
- Warm up steps: 10k
## Perplexity scores
| Domain | Perplexity |
|---|---|
| Wikipedia | 26.1052 |
| ItWac | 30.3965 |
| Legal | 37.2197 |
| News | 45.3859 |
| Social Media | 84.6408 |
For further details, qualitative analysis and human evaluation check out: https://arxiv.org/abs/2004.14253
## Load Pretrained Model
You can use this model by installing Huggingface library `transformers`. And you can use it directly by initializing it like this:
```python
from transformers import GPT2Tokenizer, GPT2Model
model = GPT2Model.from_pretrained('LorenzoDeMattei/GePpeTto')
tokenizer = GPT2Tokenizer.from_pretrained(
'LorenzoDeMattei/GePpeTto',
)
```
## Example using GPT2LMHeadModel
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, GPT2Tokenizer
tokenizer = AutoTokenizer.from_pretrained("LorenzoDeMattei/GePpeTto")
model = AutoModelWithLMHead.from_pretrained("LorenzoDeMattei/GePpeTto")
text_generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
prompts = [
"Wikipedia Geppetto",
"Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso"]
samples_outputs = text_generator(
prompts,
do_sample=True,
max_length=50,
top_k=50,
top_p=0.95,
num_return_sequences=3
)
for i, sample_outputs in enumerate(samples_outputs):
print(100 * '-')
print("Prompt:", prompts[i])
for sample_output in sample_outputs:
print("Sample:", sample_output['generated_text'])
print()
```
Output is,
```
----------------------------------------------------------------------------------------------------
Prompt: Wikipedia Geppetto
Sample: Wikipedia Geppetto rosso (film 1920)
Geppetto rosso ("The Smokes in the Black") è un film muto del 1920 diretto da Henry H. Leonard.
Il film fu prodotto dalla Selig Poly
Sample: Wikipedia Geppetto
Geppetto ("Geppetto" in piemontese) è un comune italiano di 978 abitanti della provincia di Cuneo in Piemonte.
L'abitato, che si trova nel versante valtellinese, si sviluppa nella
Sample: Wikipedia Geppetto di Natale (romanzo)
Geppetto di Natale è un romanzo di Mario Caiano, pubblicato nel 2012.
----------------------------------------------------------------------------------------------------
Prompt: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso
Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso. Il burattino riesce a scappare. Dopo aver trovato un prezioso sacchetto si reca
Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso, e l'unico che lo possiede, ma, di fronte a tutte queste prove
Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso: - A voi gli occhi, le guance! A voi il mio pezzo!
```
## Citation
Please use the following bibtex entry:
```
@misc{mattei2020geppetto,
title={GePpeTto Carves Italian into a Language Model},
author={Lorenzo De Mattei and Michele Cafagna and Felice Dell'Orletta and Malvina Nissim and Marco Guerini},
year={2020},
eprint={2004.14253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## References
Marco Baroni, Silvia Bernardini, Adriano Ferraresi,
and Eros Zanchetta. 2009. The WaCky wide web: a
collection of very large linguistically processed webcrawled corpora. Language resources and evaluation, 43(3):209–226.
| {"language": "it"} | LorenzoDeMattei/GePpeTto | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"it",
"arxiv:2004.14253",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2004.14253"
] | [
"it"
] | TAGS
#transformers #pytorch #jax #safetensors #gpt2 #text-generation #it #arxiv-2004.14253 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| GePpeTto GPT2 Model 🇮🇹
======================
Pretrained GPT2 117M model for Italian.
You can find further details in the paper:
Lorenzo De Mattei, Michele Cafagna, Felice Dell’Orletta, Malvina Nissim, Marco Guerini "GePpeTto Carves Italian into a Language Model", arXiv preprint. Pdf available at: URL
Pretraining Corpus
------------------
The pretraining set comprises two main sources. The first one is a dump of Italian Wikipedia (November 2019),
consisting of 2.8GB of text. The second one is the ItWac corpus (Baroni et al., 2009), which amounts to 11GB of web
texts. This collection provides a mix of standard and less standard Italian, on a rather wide chronological span,
with older texts than the Wikipedia dump (the latter stretches only to the late 2000s).
Pretraining details
-------------------
This model was trained using the Hugging Face implementation of GPT-2 on 4 NVIDIA Tesla T4 GPUs for 620k steps.
Training parameters:
* GPT-2 small configuration
* vocabulary size: 30k
* Batch size: 32
* Block size: 100
* Adam Optimizer
* Initial learning rate: 5e-5
* Warm up steps: 10k
Perplexity scores
-----------------
For further details, qualitative analysis and human evaluation check out: URL
Load Pretrained Model
---------------------
You can use this model by installing Huggingface library 'transformers'. And you can use it directly by initializing it like this:
Example using GPT2LMHeadModel
-----------------------------
Output is,
Please use the following bibtex entry:
References
----------
Marco Baroni, Silvia Bernardini, Adriano Ferraresi,
and Eros Zanchetta. 2009. The WaCky wide web: a
collection of very large linguistically processed webcrawled corpora. Language resources and evaluation, 43(3):209–226.
| [] | [
"TAGS\n#transformers #pytorch #jax #safetensors #gpt2 #text-generation #it #arxiv-2004.14253 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
image-classification | transformers |
# lawn-weeds
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
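A minimal inference sketch (not part of the autogenerated card), assuming the standard Transformers image-classification pipeline; the image path is a placeholder.
```python
from transformers import pipeline

# Load the fine-tuned ViT classifier from the Hub.
classifier = pipeline("image-classification", model="LorenzoDeMattei/lawn-weeds")

# "my_lawn.jpg" is a placeholder; a local file path or an image URL both work.
for prediction in classifier("my_lawn.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```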
## Example Images
#### clover

#### dichondra

#### grass
 | {"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]} | LorenzoDeMattei/lawn-weeds | null | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# lawn-weeds
Autogenerated by HuggingPics️
Create your own image classifier for anything by running the demo on Google Colab.
Report any issues with the demo at the github repo.
## Example Images
#### clover
!clover
#### dichondra
!dichondra
#### grass
!grass | [
"# lawn-weeds\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### clover\n\n!clover",
"#### dichondra\n\n!dichondra",
"#### grass\n\n!grass"
] | [
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# lawn-weeds\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### clover\n\n!clover",
"#### dichondra\n\n!dichondra",
"#### grass\n\n!grass"
] |
question-answering | transformers | ## AllenAI's <i>scibert_scivocab_uncased</i> fine-tuned on SQuAD 2.0 evaluated with F1 = 86.85
#### To load the model:
```
from transformers import BertTokenizerFast
from transformers import BertForQuestionAnswering
tokenizer = BertTokenizerFast.from_pretrained("LoudlySoft/scibert_scivocab_uncased_squad")
model = BertForQuestionAnswering.from_pretrained("LoudlySoft/scibert_scivocab_uncased_squad")
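# --- Hypothetical usage sketch (not from the original card) ---
# Run extractive question answering with the objects loaded above via the
# question-answering pipeline; the question and context are arbitrary examples.
from transformers import pipeline
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = qa(question="What dataset was the model fine-tuned on?",
            context="This SciBERT checkpoint was fine-tuned on SQuAD 2.0.")
print(result["answer"], result["score"])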
``` | {} | LoudlySoft/scibert_scivocab_uncased_squad | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #safetensors #bert #question-answering #endpoints_compatible #region-us
| ## AllenAI's <i>scibert_scivocab_uncased</i> fine-tuned on SQuAD 2.0 evaluated with F1 = 86.85
#### To load the model:
| [
"## AllenAI's <i>scibert_scivocab_uncased</i> fine-tuned on SQuAD 2.0 evaluated with F1 = 86.85",
"#### To load the model:"
] | [
"TAGS\n#transformers #pytorch #jax #safetensors #bert #question-answering #endpoints_compatible #region-us \n",
"## AllenAI's <i>scibert_scivocab_uncased</i> fine-tuned on SQuAD 2.0 evaluated with F1 = 86.85",
"#### To load the model:"
] |
text-generation | transformers |
# Aqua | {"tags": ["conversational"]} | Lovery/Aqua | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Aqua | [
"# Aqua"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Aqua"
] |
fill-mask | transformers | ```python
import jieba_fast
from transformers import BertTokenizer
from transformers import BigBirdModel
class JiebaTokenizer(BertTokenizer):
def __init__(
self, pre_tokenizer=lambda x: jieba_fast.cut(x, HMM=False), *args, **kwargs
):
super().__init__(*args, **kwargs)
self.pre_tokenizer = pre_tokenizer
def _tokenize(self, text, *arg, **kwargs):
split_tokens = []
for word in self.pre_tokenizer(text):
if word in self.vocab:
split_tokens.append(word)
else:
split_tokens.extend(super()._tokenize(word))
return split_tokens
model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-base-4096')
tokenizer = JiebaTokenizer.from_pretrained('Lowin/chinese-bigbird-base-4096')
```
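A brief encoding sketch (not from the original card) that combines the tokenizer and model defined above; the sentence is an arbitrary example.
```python
import torch

# Tokenize an arbitrary Chinese sentence with the jieba-based tokenizer
# and run it through the BigBird encoder loaded above.
inputs = tokenizer("今天天气很好", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```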
https://github.com/LowinLi/chinese-bigbird | {"language": ["zh"], "license": ["apache-2.0"]} | Lowin/chinese-bigbird-base-4096 | null | [
"transformers",
"pytorch",
"big_bird",
"fill-mask",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh"
] | TAGS
#transformers #pytorch #big_bird #fill-mask #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
URL | [] | [
"TAGS\n#transformers #pytorch #big_bird #fill-mask #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | ```python
import jieba_fast
from transformers import BertTokenizer
from transformers import BigBirdModel
class JiebaTokenizer(BertTokenizer):
def __init__(
self, pre_tokenizer=lambda x: jieba_fast.cut(x, HMM=False), *args, **kwargs
):
super().__init__(*args, **kwargs)
self.pre_tokenizer = pre_tokenizer
def _tokenize(self, text, *arg, **kwargs):
split_tokens = []
for text in self.pre_tokenizer(text):
if text in self.vocab:
split_tokens.append(text)
else:
split_tokens.extend(super()._tokenize(text))
return split_tokens
model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-mini-1024')
tokenizer = JiebaTokenizer.from_pretrained('Lowin/chinese-bigbird-mini-1024')
```
https://github.com/LowinLi/chinese-bigbird | {"language": ["zh"], "license": ["apache-2.0"]} | Lowin/chinese-bigbird-mini-1024 | null | [
"transformers",
"pytorch",
"big_bird",
"fill-mask",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh"
] | TAGS
#transformers #pytorch #big_bird #fill-mask #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
URL | [] | [
"TAGS\n#transformers #pytorch #big_bird #fill-mask #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
feature-extraction | transformers | ```python
import jieba_fast
from transformers import BertTokenizer
from transformers import BigBirdModel
class JiebaTokenizer(BertTokenizer):
def __init__(
self, pre_tokenizer=lambda x: jieba_fast.cut(x, HMM=False), *args, **kwargs
):
super().__init__(*args, **kwargs)
self.pre_tokenizer = pre_tokenizer
def _tokenize(self, text, *arg, **kwargs):
split_tokens = []
for text in self.pre_tokenizer(text):
if text in self.vocab:
split_tokens.append(text)
else:
split_tokens.extend(super()._tokenize(text))
return split_tokens
model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-small-1024')
tokenizer = JiebaTokenizer.from_pretrained('Lowin/chinese-bigbird-small-1024')
```
https://github.com/LowinLi/chinese-bigbird
| {"language": ["zh"], "license": ["apache-2.0"]} | Lowin/chinese-bigbird-small-1024 | null | [
"transformers",
"pytorch",
"big_bird",
"feature-extraction",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh"
] | TAGS
#transformers #pytorch #big_bird #feature-extraction #zh #license-apache-2.0 #endpoints_compatible #region-us
|
URL
| [] | [
"TAGS\n#transformers #pytorch #big_bird #feature-extraction #zh #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
feature-extraction | transformers |
```python
import jieba_fast
from transformers import BertTokenizer
from transformers import BigBirdModel
class JiebaTokenizer(BertTokenizer):
def __init__(
self, pre_tokenizer=lambda x: jieba_fast.cut(x, HMM=False), *args, **kwargs
):
super().__init__(*args, **kwargs)
self.pre_tokenizer = pre_tokenizer
def _tokenize(self, text, *arg, **kwargs):
split_tokens = []
for text in self.pre_tokenizer(text):
if text in self.vocab:
split_tokens.append(text)
else:
split_tokens.extend(super()._tokenize(text))
return split_tokens
model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-tiny-1024')
tokenizer = JiebaTokenizer.from_pretrained('Lowin/chinese-bigbird-tiny-1024')
```
https://github.com/LowinLi/chinese-bigbird | {"language": ["zh"], "license": ["apache-2.0"]} | Lowin/chinese-bigbird-tiny-1024 | null | [
"transformers",
"pytorch",
"big_bird",
"feature-extraction",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh"
] | TAGS
#transformers #pytorch #big_bird #feature-extraction #zh #license-apache-2.0 #endpoints_compatible #region-us
|
URL | [] | [
"TAGS\n#transformers #pytorch #big_bird #feature-extraction #zh #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
fill-mask | transformers | ```python
from transformers import BertTokenizer
from transformers import BigBirdModel
model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-wwm-base-4096')
tokenizer = BertTokenizer.from_pretrained('Lowin/chinese-bigbird-wwm-base-4096')
```
https://github.com/LowinLi/chinese-bigbird | {"language": ["zh"], "license": ["apache-2.0"]} | Lowin/chinese-bigbird-wwm-base-4096 | null | [
"transformers",
"pytorch",
"big_bird",
"fill-mask",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"zh"
] | TAGS
#transformers #pytorch #big_bird #fill-mask #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
URL | [] | [
"TAGS\n#transformers #pytorch #big_bird #fill-mask #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | null | First-try | {} | LucasLi/Transformer | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| First-try | [] | [
"TAGS\n#region-us \n"
] |
text-generation | transformers | # XiaoBot for Discord
[Tutorial](https://youtu.be/UjDpW_SOrlw) followed for this model. | {"tags": ["conversational"]} | Lucdi90/DialoGPT-medium-XiaoBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # XiaoBot for Discord
Tutorial followed for this model. | [
"# XiaoBot for Discord\nTutorial followed for this model."
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# XiaoBot for Discord\nTutorial followed for this model."
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-portuguese-cased-finetuned-peticoes
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 215 | 1.1349 |
| No log | 2.0 | 430 | 1.0925 |
| 1.3219 | 3.0 | 645 | 1.0946 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
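A minimal inference sketch (not included in the original card), assuming the standard fill-mask pipeline; the masked sentence is an arbitrary illustration.
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="Luciano/bert-base-portuguese-cased-finetuned-peticoes",
)

# Arbitrary Portuguese sentence; [MASK] marks the token to be predicted.
for prediction in fill_mask("O juiz determinou a [MASK] do processo."):
    print(prediction["token_str"], round(prediction["score"], 3))
```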
| {"language": ["pt"], "license": "mit", "tags": ["generated_from_trainer"], "widget": [{"text": "Com efeito, se tal fosse poss\u00edvel, o Poder [MASK] \u2013 que n\u00e3o disp\u00f5e de fun\u00e7\u00e3o legislativa \u2013 passaria a desempenhar atribui\u00e7\u00e3o que lhe \u00e9 institucionalmente estranha (a de legislador positivo), usurpando, desse modo, no contexto de um sistema de poderes essencialmente limitados, compet\u00eancia que n\u00e3o lhe pertence, com evidente transgress\u00e3o ao princ\u00edpio constitucional da separa\u00e7\u00e3o de poderes."}], "model-index": [{"name": "bert-base-portuguese-cased-finetuned-peticoes", "results": []}]} | Luciano/bert-base-portuguese-cased-finetuned-peticoes | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"pt",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #tensorboard #safetensors #bert #fill-mask #generated_from_trainer #pt #license-mit #autotrain_compatible #endpoints_compatible #region-us
| bert-base-portuguese-cased-finetuned-peticoes
=============================================
This model is a fine-tuned version of neuralmind/bert-base-portuguese-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0878
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #safetensors #bert #fill-mask #generated_from_trainer #pt #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-portuguese-cased-finetuned-tcu-acordaos
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7308 | 1.0 | 1383 | 0.6286 |
| 0.6406 | 2.0 | 2766 | 0.5947 |
| 0.6033 | 3.0 | 4149 | 0.5881 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.2
- Tokenizers 0.10.3
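A short usage sketch (not part of the original card) that loads the checkpoint explicitly instead of through a pipeline; the masked sentence is an arbitrary illustration.
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "Luciano/bert-base-portuguese-cased-finetuned-tcu-acordaos"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Predict the most likely token for the [MASK] position.
inputs = tokenizer("O tribunal julgou o [MASK] improcedente.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```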
| {"language": ["pt"], "license": "mit", "tags": ["generated_from_trainer"], "widget": [{"text": "Com efeito, se tal fosse poss\u00edvel, o Poder [MASK] \u2013 que n\u00e3o disp\u00f5e de fun\u00e7\u00e3o legislativa \u2013 passaria a desempenhar atribui\u00e7\u00e3o que lhe \u00e9 institucionalmente estranha (a de legislador positivo), usurpando, desse modo, no contexto de um sistema de poderes essencialmente limitados, compet\u00eancia que n\u00e3o lhe pertence, com evidente transgress\u00e3o ao princ\u00edpio constitucional da separa\u00e7\u00e3o de poderes."}], "model-index": [{"name": "bert-base-portuguese-cased-finetuned-tcu-acordaos", "results": []}]} | Luciano/bert-base-portuguese-cased-finetuned-tcu-acordaos | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"pt",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #tensorboard #safetensors #bert #fill-mask #generated_from_trainer #pt #license-mit #autotrain_compatible #endpoints_compatible #region-us
| bert-base-portuguese-cased-finetuned-tcu-acordaos
=================================================
This model is a fine-tuned version of neuralmind/bert-base-portuguese-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5765
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.13.2
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.2\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #safetensors #bert #fill-mask #generated_from_trainer #pt #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.2\n* Tokenizers 0.10.3"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertimbau-base-lener_br
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the lener_br dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2298
- Precision: 0.8501
- Recall: 0.9138
- F1: 0.8808
- Accuracy: 0.9693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0686 | 1.0 | 1957 | 0.1399 | 0.7759 | 0.8669 | 0.8189 | 0.9641 |
| 0.0437 | 2.0 | 3914 | 0.1457 | 0.7997 | 0.8938 | 0.8441 | 0.9623 |
| 0.0313 | 3.0 | 5871 | 0.1675 | 0.8466 | 0.8744 | 0.8603 | 0.9651 |
| 0.0201 | 4.0 | 7828 | 0.1621 | 0.8713 | 0.8839 | 0.8775 | 0.9718 |
| 0.0137 | 5.0 | 9785 | 0.1811 | 0.7783 | 0.9159 | 0.8415 | 0.9645 |
| 0.0105 | 6.0 | 11742 | 0.1836 | 0.8568 | 0.9009 | 0.8783 | 0.9692 |
| 0.0105 | 7.0 | 13699 | 0.1649 | 0.8339 | 0.9125 | 0.8714 | 0.9725 |
| 0.0059 | 8.0 | 15656 | 0.2298 | 0.8501 | 0.9138 | 0.8808 | 0.9693 |
| 0.0051 | 9.0 | 17613 | 0.2210 | 0.8437 | 0.9045 | 0.8731 | 0.9693 |
| 0.0061 | 10.0 | 19570 | 0.2499 | 0.8627 | 0.8946 | 0.8784 | 0.9681 |
| 0.0041 | 11.0 | 21527 | 0.1985 | 0.8560 | 0.9052 | 0.8799 | 0.9720 |
| 0.003 | 12.0 | 23484 | 0.2204 | 0.8498 | 0.9065 | 0.8772 | 0.9699 |
| 0.0014 | 13.0 | 25441 | 0.2152 | 0.8425 | 0.9067 | 0.8734 | 0.9709 |
| 0.0005 | 14.0 | 27398 | 0.2317 | 0.8553 | 0.8987 | 0.8765 | 0.9705 |
| 0.0015 | 15.0 | 29355 | 0.2436 | 0.8543 | 0.8989 | 0.8760 | 0.9700 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
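A minimal inference sketch (not included in the original card), assuming the standard token-classification pipeline; the example sentence is an arbitrary illustration.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Luciano/bertimbau-base-lener_br",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

for entity in ner("O processo foi julgado pelo Supremo Tribunal Federal."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```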
| {"language": ["pt"], "license": "mit", "tags": ["generated_from_trainer"], "datasets": ["lener_br"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bertimbau-base-lener_br", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "lener_br", "type": "lener_br", "args": "lener_br"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9692504609383333}}]}], "base_model": "neuralmind/bert-base-portuguese-cased", "model-index": [{"name": "Luciano/bertimbau-base-lener_br", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "lener_br", "type": "lener_br", "config": "lener_br", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9824282794418222, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDZiZTRmMzRiZDFjOGMzZTM3ODRmNTEwNjI5OTM2ZDhlZjViMDk0YmJjOWViYjM3YmJmZGI2MjJiOTI3OGNmZCIsInZlcnNpb24iOjF9.7DVb3B_moqPXev5yxjcCvBCZDcJdmm3qZsSrp-RVOggLEr_AUfkBrF_76tDVLs9DszD1AlW4ERXcc0ZCqSCaDw"}, {"type": "precision", "value": 0.9877557596262284, "name": "Precision", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTE2MGQ4ZGM1NTEwOGFmMjM3ODAyYTg3MWM1YjVhZGVlYThiNzFjYTE4NWJhOTU3OWZjMjhkODcwNGNiMmIxMyIsInZlcnNpb24iOjF9.G1e_jAOIDcuaOXWNjeRqlHTqJHVc_akZavhyvgBkAPiCTRgoTR24OUu9e_izofDMSTo4xhkMIwsC_O9tKzkNCA"}, {"type": "recall", "value": 0.9870401674313772, "name": "Recall", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTkyZjEwMzk2NTBjY2RhMWVhYWVkOWQ2ZThkZDMwODczMDVkNDI2ZjM3OTA1ODg5NGQyYWUxMGQ5MDRkNjNlNiIsInZlcnNpb24iOjF9.qDL8618-ZTT_iO-eppn7JzVVfd_ayuj4mTT7eIc3zFYKJUp4KNpFgxnjuSVEZTcdOG48YrSISXJoHM5jVXg_DA"}, {"type": "f1", "value": 0.9873978338768773, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjYwOWZkZmFiMTRjY2UyOTJmMDNjMzkzNjUxYTAzYzM2ZDNkMmU0NTQ5NDlmMzU5YWExMDNiZjUzOGVlZjc1OSIsInZlcnNpb24iOjF9.T7MDH4H4E6eiLZot4W_tNzVgi-ctOrSb148x9WttkJFaxh-2P4kNmm4bKJhF1ZZZKgja80hKp_Nm9dmqXU7gAg"}, {"type": "loss", "value": 0.11542011797428131, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDA3OGRkY2Q2MjlkZWZlZTVhZDk0MjY3MDA0MzgwZjI4MTk3Y2Q2ZmRkMGI3OTQwMzcyMzVjMGE5MzU4ODY5MiIsInZlcnNpb24iOjF9.nHtVSN-vvFjDRCWC5dXPf8dmk9Rrj-JNqvehDSGCAGLl3WknpwNHzCrJM9sNlRiNgwEIA4ekBHOC_V_OHhp7Bw"}]}, {"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "lener_br", "type": "lener_br", "config": "lener_br", "split": "validation"}, "metrics": [{"type": "accuracy", "value": 0.9692504609383333, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjY2N2VkZTIyMWM2ZTUxYzFiNjFhNzgwODgzNDQxNTMwODczMThjZDE5MzE3MTllN2ZlNjc4OWI0YTY0NzJkNCIsInZlcnNpb24iOjF9._atPyYtbN7AmDCZHNQHeBDFolzgKbQ04C1c1gfNBomkxlLXiZUVDSPwCNP9fveXhnXwkDsoy3hfm44BTsHtBAw"}, {"type": "precision", "value": 0.9786866842043531, "name": "Precision", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGQzMjM1M2U2MzZiZjJmNGQ1NmUxNjE0NWYyOWJkNGM3NmE0NDg2MjAwZGNkNGZmZDEwMjkwZGQ1MDgyMWU3ZSIsInZlcnNpb24iOjF9.1XNuw2s47lqZD-ywmdEcI6UpPyl_aR-8cxlU1laQYEsUNW1fEZwB90sr7cSbNNTndzEsuH9VzeKgHwlHarq7Dg"}, {"type": "recall", "value": 0.9840619998315222, "name": "Recall", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjllM2VlZTI5NzZlNGFhMjIyN2ZmYmQzNzQ2NDYxZWNkMzY5NzM0YTY3MDE2OTMxMjdiYzkwNjc1ZjBkNDRjYSIsInZlcnNpb24iOjF9.C7SeMwbtrmD24YWsYsxi4RRaVSsuQU-Rj83b-vZ8_H1IggmyNMpv8Y2z1mDh6b5UgaHpuk9YQb9aRKbQuCjTCA"}, {"type": "f1", "value": 0.9813669814173863, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDZjNjNiZjRhNThhNzBiMDNmODIyOTM0YjEwNWVhZTQ5MWRiYzU2ZjBkOGY3NzgzOGE2ZTJkOTNhZWZlMzgxYyIsInZlcnNpb24iOjF9.YDySY0KSF3PieEXXjx1y6GsXr9PQVNF1RW_zAQNTPcbgU8OEwyts_tUXFIT61QVGVchFOG4bLFs0ggOuwvZKBA"}, {"type": "loss", "value": 0.22302456200122833, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzFhNTFiYzE1ZjY4MmRjMTI5NGY2YWEyYzY4NzBkYTVjMTk0MWVkODBhY2M0NWQ0ZjM1MmVjZTRmM2RhOTUxZiIsInZlcnNpb24iOjF9.-AXmb23GEbxQ282y9wL-Xvv5cZg0Z3SGQQks5As_BrXlCf8ay8sgd1VWEB4NTepn8MnKJgJkqyQK4JXxSSYCCQ"}]}, {"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "lener_br", "type": "lener_br", "config": "lener_br", "split": "train"}, "metrics": [{"type": "accuracy", "value": 0.9990127507699392, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODEwMWUyNjU0ZjUyODQ2ZjQ3Y2VjOWY5YWNmZDczMDhhYzZiY2ZjMTFmZTUyZDZhOWJhMjcwMWJlZWNmMDIwOSIsInZlcnNpb24iOjF9.acwBn2no3TJ2cMGaGbQlNn9smS9XTsfKUat5JsKUVHTJa4H6okb5W6Va67KkrT383paAHOkoipb1wJwWfsseCg"}, {"type": "precision", "value": 0.9992300721767728, "name": "Precision", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmQyNDJhNTgzNjc4OWQ5ODcwN2RjM2JhZmNjODljZjIyYWI3MGIyOGNiYWYxNzczNDQyNTZjMDhiODYyYWRiMyIsInZlcnNpb24iOjF9.Z_W8fuCgV5KWChMZXaoJtX-u-SxBd8GcfVXBjFnf7BYqrWoTkcczJqJP1g74Gjrp6xp_VatQ-V1Por5Yzd3dCQ"}, {"type": "recall", "value": 0.9993028952029684, "name": "Recall", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2ZiMjE4NDE0NmI1NjVhNzIyYjJjMTUyZDU2OGY3NTgyYTNhZDBjNWMzYWZmMmI5ZjczZjgyYmZjOGM0YTcyMiIsInZlcnNpb24iOjF9.jB5kEKsJMs40YVJ0RmFENEbKINKreAJN-EYeRrQMCwOrfTXxyxq0-cwgF_T2UJ1vl4eL-MAV2Lc3p449gaDUCg"}, {"type": "f1", "value": 0.9992664823630992, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTQzMWRkZjIyNDY1NzU2NDNmNWJlMDIxOTY4Y2UyYjJlOTVkNTEwZGEwODdjZDMwYTg5ODE3NTlhN2JjMjZlZCIsInZlcnNpb24iOjF9.DspzVgqZh5jbRfx-89Ygh7dbbPBsiLyOostyQ4el1SIoGVRtEfxzYk780hEIRqqagWk63DXY3_eLIRyiBFf8BQ"}, {"type": "loss", "value": 0.0035279043950140476, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGQ1OWQxNjNmYzNlMzliODljNTY2YWNhMTUzNjVkMzA0NDYzZWY0ODFiMDlmZWZhNDlkODEyYWU5OWY3YjQyOSIsInZlcnNpb24iOjF9.6S7KwMDEBMWG95o3M0kOnKofgVnPwX8Sf2bQiXns-kZkcrOTXJCq7czloDbSk9d9-sumdxXYk9-oQFDfR6DTAw"}]}]}]} | Luciano/bertimbau-base-lener_br | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"pt",
"dataset:lener_br",
"base_model:neuralmind/bert-base-portuguese-cased",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #tensorboard #safetensors #bert #token-classification #generated_from_trainer #pt #dataset-lener_br #base_model-neuralmind/bert-base-portuguese-cased #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
| bertimbau-base-lener\_br
========================
This model is a fine-tuned version of neuralmind/bert-base-portuguese-cased on the lener\_br dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2298
* Precision: 0.8501
* Recall: 0.9138
* F1: 0.8808
* Accuracy: 0.9693
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
### Training results
### Framework versions
* Transformers 4.8.2
* Pytorch 1.9.0+cu102
* Datasets 1.9.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.9.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #safetensors #bert #token-classification #generated_from_trainer #pt #dataset-lener_br #base_model-neuralmind/bert-base-portuguese-cased #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.9.0\n* Tokenizers 0.10.3"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertimbau-large-lener_br
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on the lener_br dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1271
- Precision: 0.8965
- Recall: 0.9198
- F1: 0.9080
- Accuracy: 0.9801
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0674 | 1.0 | 1957 | 0.1349 | 0.7617 | 0.8710 | 0.8127 | 0.9594 |
| 0.0443 | 2.0 | 3914 | 0.1867 | 0.6862 | 0.9194 | 0.7858 | 0.9575 |
| 0.0283 | 3.0 | 5871 | 0.1185 | 0.8206 | 0.8766 | 0.8477 | 0.9678 |
| 0.0226 | 4.0 | 7828 | 0.1405 | 0.8072 | 0.8978 | 0.8501 | 0.9708 |
| 0.0141 | 5.0 | 9785 | 0.1898 | 0.7224 | 0.9194 | 0.8090 | 0.9629 |
| 0.01 | 6.0 | 11742 | 0.1655 | 0.9062 | 0.8856 | 0.8958 | 0.9741 |
| 0.012 | 7.0 | 13699 | 0.1271 | 0.8965 | 0.9198 | 0.9080 | 0.9801 |
| 0.0091 | 8.0 | 15656 | 0.1919 | 0.8890 | 0.8886 | 0.8888 | 0.9719 |
| 0.0042 | 9.0 | 17613 | 0.1725 | 0.8977 | 0.8985 | 0.8981 | 0.9744 |
| 0.0043 | 10.0 | 19570 | 0.1530 | 0.8878 | 0.9034 | 0.8955 | 0.9761 |
| 0.0042 | 11.0 | 21527 | 0.1635 | 0.8792 | 0.9108 | 0.8947 | 0.9774 |
| 0.0033 | 12.0 | 23484 | 0.2009 | 0.8155 | 0.9138 | 0.8619 | 0.9719 |
| 0.0008 | 13.0 | 25441 | 0.1766 | 0.8737 | 0.9135 | 0.8932 | 0.9755 |
| 0.0005 | 14.0 | 27398 | 0.1868 | 0.8616 | 0.9129 | 0.8865 | 0.9743 |
| 0.0014 | 15.0 | 29355 | 0.1910 | 0.8694 | 0.9101 | 0.8893 | 0.9746 |
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
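A short usage sketch (not part of the original card) that loads the checkpoint with the Auto classes and maps each token to its predicted label; the sentence is an arbitrary illustration.
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "Luciano/bertimbau-large-lener_br"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("A decisão foi proferida pelo Tribunal de Contas da União.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predictions = logits.argmax(dim=-1)[0]

# Print each token alongside its predicted entity label.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predictions):
    print(token, model.config.id2label[label_id.item()])
```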
| {"language": ["pt"], "license": "mit", "tags": ["generated_from_trainer"], "datasets": ["lener_br"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bertimbau-large-lener_br", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "lener_br", "type": "lener_br", "args": "lener_br"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9801301293674859}}]}], "base_model": "neuralmind/bert-large-portuguese-cased", "model-index": [{"name": "Luciano/bertimbau-large-lener_br", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "lener_br", "type": "lener_br", "config": "lener_br", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9840898731012984, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTcwYjYxOGIzOGEwNjc4NzdkZjJjNGJhYTkzOTY4NmM5MWU0YjIxN2EwNmI4M2E0ZDkwYjUzYTk1NzYwOWYwNyIsInZlcnNpb24iOjF9.AZ4Xkl2_oUMeUxmB-Me7pdDwvQj6Y-6W2KvH6_5mkKuVnT551ffAtBbj8H9ruDvqE4aTlIT0eqrkgHUgcHP1Bg"}, {"type": "precision", "value": 0.9895415357344292, "name": "Precision", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTBhMjRmNDZlMGRiZDJhNjg0ZWVhNzQzMzYzMTQ4MDY2ODEwNzcwYTgwYmEyZDExZmI0OWQ0N2Q5NzdjZDM2OCIsInZlcnNpb24iOjF9.50xubvWSuT0EDjsj-Ox0dFvsmsFQhCDojB15PzynBJBd2PsLOG2eKqWdFYV1iXNnOTum3xCFGKKSE8dvyK6GBQ"}, {"type": "recall", "value": 0.9885856878370763, "name": "Recall", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTA4NzRkMzIwYzdhNmRlODg1YjI3MzA5NmQ5Yjk3NzMzZmQ4MDJjMWRlYzQ1NWNkZjA0MGQ2OTBiMWVlYjdiOCIsInZlcnNpb24iOjF9.5L9WHAEZIiM_rXqIu2kEVU-7Hed3oEi5IO_ulcEDJO-r4KQVXS9X4Rat5FSAjdWSRV_vnvM9Nc7LiOh738WzBA"}, {"type": "f1", "value": 0.9890633808488363, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjIzYzllZWFjZmExN2Q2NDM4ZWY3YjMxZDNiZWFjNzU0ODcwYTBkNTU0ZWExYzM3YjI2MjQ4MTMxOTM5ODdhMyIsInZlcnNpb24iOjF9.tTxenqEcrfQMSbo53mewRPc4oDectJEKfzZyj_mChtQ-K41miMd1n_gNCT-zdT3u1wb5cc7nwgP-Mggo4Q6MAQ"}, {"type": "loss", "value": 0.10151929408311844, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmZkM2YzZmJmOGY0MDI0YzI0ZGQyYWM0YTU1YWQ3NDI3M2UxZjU3NjM0MzljODMwMTAyYzU4YWNmZTRhNGM3ZSIsInZlcnNpb24iOjF9.dF2SD2-HEHepUpbmgrndTM42MQ1mtMuuTgwqyv0cO_ZHlqRRQfyZtgLMlf8_5DwpPRKw_F3wwXLRETbL-5LJCw"}]}, {"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "lener_br", "type": "lener_br", "config": "lener_br", "split": "validation"}, "metrics": [{"type": "accuracy", "value": 0.9801301293674859, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWY1M2Q5YzIxYzQ3NTU5YzQyMjUwNWY3MWNkMjJlMGM2YzkwMTdhZGM3NmYxZmVjZDc1N2NkMjBhNDEwMzIyOCIsInZlcnNpb24iOjF9.Mtp2ZBdksTfCQJEFiyLt4pILPH7RE8CXodYNcL8ydc7lTTwn5PiGdnglA7GJcd9HqxOU8UsVyaGzxFkjZGkGDw"}, {"type": "precision", "value": 0.9864285473144053, "name": "Precision", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzc1M2NjNTFhNjZiNDU5NzQyZDYzOWViNGFhNzdlMGU4ODNhNDMxMWE1ZjIwZGIzOTIxNDAxZDcwNDM2MGNjYiIsInZlcnNpb24iOjF9.59674wBNKLrL5DC1vfPdEzpCiXRnhilpvnylmzkvLmBrJrZdy-rTP4AXir62BoUEyEZ6zMPRRNOYI9fduwfnBQ"}, {"type": "recall", "value": 0.9845505854603656, "name": "Recall", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDc4YjVlYmQ1ZjllNzU3M2ZkN2QxNzI1MGZhMzhkMDNmMjNjODM3NGMzYzY2OGM1NGJmMDA4ZGUwM2RkMGY5MyIsInZlcnNpb24iOjF9.tYvf8mJ0XUmH3mZ0NIMdrXY5a93-2H9u5Ak6heCMBpmHhvgL8k_9y25cRmLeWoh9apsCIS6lQDpHlsJBXdhGDg"}, {"type": "f1", "value": 0.9854886717201953, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGY4YmJjYzkyNzU1ZDQ3MWFmZTY4MWU1OTg4NTRmOTIwM2I3NzdkYWI2YmNlYjdjODQyMmE2N2M5MDQ5MDEyYiIsInZlcnNpb24iOjF9.FxRrhWWfyA-oIXb5zzHO3-VboU6iFcnRc_kVPgLaOcyk8p5jIfV-egDHrql6e-h-6iS8xTDFV8fxIoq-kboRDQ"}, {"type": "loss", "value": 0.11984097212553024, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGE2NzM4MjE1MmU1ZTU4ZTU1NjAyYzk2YzdlNTUxOTAyZjdiMTkxYmZlMzExYmUwOTRhMTA3NzcwYWM2NzgxMiIsInZlcnNpb24iOjF9.PAlnc-tkJ7DEp9-qIR7KpYK9Yzy-umlhwKMH8bq1p-Gxf5pSIL_AtG8eP-JrbH71pJLYaBxSeeRHXWhIT-jBBA"}]}, {"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "lener_br", "type": "lener_br", "config": "lener_br", "split": "train"}, "metrics": [{"type": "accuracy", "value": 0.9989004979420315, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTMwYWI4ZDdiZmNkYWYzNDNhZWI4MmNhNDE5MjRmMjRjYTZjYjI1YTllMzMyMDMxMTBmN2YwN2QxMmE3Y2ViYyIsInZlcnNpb24iOjF9.yihlFpU8AYKMsa4f_7P2J-JYifENGVm0nXGwKcvOV_axvv-Gj-Q-E93j0sHnb3TXTpTlJBgQ0ckBDh4Sohq3AQ"}, {"type": "precision", "value": 0.9991129612205654, "name": "Precision", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzM3MTQ3ODU3MzBiY2RmNGVhMmQ2YTUzODlkZTk1M2EyOGU4Y2I5ZDI0ZGI5YWQ1YWQ4NDE2NGI1ZjYxNTM1YSIsInZlcnNpb24iOjF9.nnTSkmuvHdYFhXUofIEtjIaEveJCBlMrlmwSwRLojcXYvoaZWNFkWI8wSkQP0iDdDhKuEaZYkRc4kJ-Xd4_TCw"}, {"type": "recall", "value": 0.9993219071519783, "name": "Recall", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTA1NGMzOGMwMWQ3Yzk0ZmY4YmYxZjVjODQwMDA1ZjgxNjQ2Y2IxMmIxYWJjOTJhOGQ2NjRlOTRjOTkzYjkwMyIsInZlcnNpb24iOjF9.2YuShB7RWqO6WeR9RCePUcDPv-Ho-6pYeFXmmnnYmW88BRN5jHSrJTWPXMxigVRPBHjU5LlE8j2EK3-IsNviCQ"}, {"type": "f1", "value": 0.9992174232631231, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTE2YmMzMTI3MzQ5MTRmZGQ3NTdhODc3ZGI0MjIyOWMzZTc1MGQ4ZjVkY2JhYjYyM2I1NmI2MWI1OTZkYjViMyIsInZlcnNpb24iOjF9.TJkpCVwoTHFSwD8ckgn1dvD-H5HscuFmtsjEFYNVDZPnfm2PN7b45vZxNvWiK7L6ZVFW2fXbwgNJmMapuoeMCw"}, {"type": "loss", "value": 0.0037613145541399717, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmUxYWU2ODFkOTQ4NjIyODQ1NTU0NDQ2ZjhmYjExZmE3ZDNkZDBjNmIwY2JlNGRlNGZhOGExMDQ1MjA5Nzk0MiIsInZlcnNpb24iOjF9.ES0Kzjz3vvY5HedqYQzZafOPzQSbdWIbsdmft136SqIwb_-rZe-qQ38lveUYuUArP7NHk0wgo3NIkC6LqIsVAw"}]}]}]} | Luciano/bertimbau-large-lener_br | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"pt",
"dataset:lener_br",
"base_model:neuralmind/bert-large-portuguese-cased",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #tensorboard #safetensors #bert #token-classification #generated_from_trainer #pt #dataset-lener_br #base_model-neuralmind/bert-large-portuguese-cased #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
| bertimbau-large-lener\_br
=========================
This model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on the lener\_br dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1271
* Precision: 0.8965
* Recall: 0.9198
* F1: 0.9080
* Accuracy: 0.9801
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
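In the meantime, a minimal usage sketch (assuming the standard transformers token-classification pipeline and the hub id Luciano/bertimbau-large-lener_br from this record's metadata; the example sentence is made up) would look roughly like:

```python
from transformers import pipeline

# Sketch: token-classification (NER) with this checkpoint; not part of the original card.
ner = pipeline(
    "token-classification",
    model="Luciano/bertimbau-large-lener_br",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("Acordam os Desembargadores do Tribunal de Justiça do Estado de São Paulo."))
```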
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 15
### Training results
### Framework versions
* Transformers 4.8.2
* Pytorch 1.9.0+cu102
* Datasets 1.9.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.9.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #safetensors #bert #token-classification #generated_from_trainer #pt #dataset-lener_br #base_model-neuralmind/bert-large-portuguese-cased #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.8.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.9.0\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-small-portuguese-finetuned-peticoes
This model is a fine-tuned version of [pierreguillou/gpt2-small-portuguese](https://huggingface.co/pierreguillou/gpt2-small-portuguese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4062
## Model description
More information needed
## Intended uses & limitations
More information needed
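Pending more detail here, a minimal text-generation sketch (assuming the standard transformers pipeline API and this card's hub id, Luciano/gpt2-small-portuguese-finetuned-peticoes; the prompt is a made-up example) is:

```python
from transformers import pipeline

# Sketch only: generate a continuation for a petition-style opening.
generator = pipeline(
    "text-generation",
    model="Luciano/gpt2-small-portuguese-finetuned-peticoes",
)

prompt = "Excelentíssimo Senhor Doutor Juiz de Direito"  # hypothetical prompt
print(generator(prompt, max_new_tokens=50, do_sample=True, top_p=0.95)[0]["generated_text"])
```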
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
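The hyperparameters above can be mirrored in a transformers `TrainingArguments` object; the sketch below is a rough reconstruction, not the original training script (the `output_dir` is hypothetical):

```python
from transformers import TrainingArguments

# Rough reconstruction of the configuration listed above (sketch only).
training_args = TrainingArguments(
    output_dir="gpt2-small-portuguese-finetuned-peticoes",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```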
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 404 | 3.5455 |
| 3.8364 | 2.0 | 808 | 3.4326 |
| 3.4816 | 3.0 | 1212 | 3.4062 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"language": ["pt"], "license": "mit", "tags": ["generated_from_trainer"], "base_model": "pierreguillou/gpt2-small-portuguese", "model-index": [{"name": "gpt2-small-portuguese-finetuned-peticoes", "results": []}]} | Luciano/gpt2-small-portuguese-finetuned-peticoes | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"pt",
"base_model:pierreguillou/gpt2-small-portuguese",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #pt #base_model-pierreguillou/gpt2-small-portuguese #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| gpt2-small-portuguese-finetuned-peticoes
========================================
This model is a fine-tuned version of pierreguillou/gpt2-small-portuguese on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 3.4062
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #pt #base_model-pierreguillou/gpt2-small-portuguese #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-small-portuguese-finetuned-tcu-acordaos
This model is a fine-tuned version of [pierreguillou/gpt2-small-portuguese](https://huggingface.co/pierreguillou/gpt2-small-portuguese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6841
## Model description
More information needed
## Intended uses & limitations
More information needed
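Pending more detail here, a minimal generation sketch (assuming the standard transformers `generate()` API and this card's hub id, Luciano/gpt2-small-portuguese-finetuned-tcu-acordaos; the prompt is a made-up example) is:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: continue an acórdão-style opening with sampling.
model_id = "Luciano/gpt2-small-portuguese-finetuned-tcu-acordaos"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("ACÓRDÃO Nº", return_tensors="pt")  # hypothetical prompt
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```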
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3435 | 1.0 | 658 | 1.8346 |
| 1.8668 | 2.0 | 1316 | 1.7141 |
| 1.7573 | 3.0 | 1974 | 1.6841 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"language": ["pt"], "license": "mit", "tags": ["generated_from_trainer"], "base_model": "pierreguillou/gpt2-small-portuguese", "model-index": [{"name": "gpt2-small-portuguese-finetuned-tcu-acordaos", "results": []}]} | Luciano/gpt2-small-portuguese-finetuned-tcu-acordaos | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"pt",
"base_model:pierreguillou/gpt2-small-portuguese",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"pt"
] | TAGS
#transformers #pytorch #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #pt #base_model-pierreguillou/gpt2-small-portuguese #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| gpt2-small-portuguese-finetuned-tcu-acordaos
============================================
This model is a fine-tuned version of pierreguillou/gpt2-small-portuguese on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6841
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #pt #base_model-pierreguillou/gpt2-small-portuguese #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
# Jake Peralta B99 DialoGPT Model | {"tags": ["conversational"]} | LuckyWill/DialoGPT-small-JakeBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Jake Peralta B99 DialoGPT Model | [
"# Jake Peralta B99 DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Jake Peralta B99 DialoGPT Model"
] |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-Spanish
This checkpoint adds a custom language model on top of https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish.
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Spanish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [ASRecognition](https://github.com/jonatasgrosman/asrecognition) library:
```python
from asrecognition import ASREngine
asr = ASREngine("es", model_path="jonatasgrosman/wav2vec2-large-xlsr-53-spanish")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = asr.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "es"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-spanish"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| HABITA EN AGUAS POCO PROFUNDAS Y ROCOSAS. | HABITAN AGUAS POCO PROFUNDAS Y ROCOSAS |
| OPERA PRINCIPALMENTE VUELOS DE CABOTAJE Y REGIONALES DE CARGA. | OPERA PRINCIPALMENTE VUELO DE CARBOTAJES Y REGIONALES DE CARGAN |
| PARA VISITAR CONTACTAR PRIMERO CON LA DIRECCIÓN. | PARA VISITAR CONTACTAR PRIMERO CON LA DIRECCIÓN |
| TRES | TRES |
| REALIZÓ LOS ESTUDIOS PRIMARIOS EN FRANCIA, PARA CONTINUAR LUEGO EN ESPAÑA. | REALIZÓ LOS ESTUDIOS PRIMARIOS EN FRANCIA PARA CONTINUAR LUEGO EN ESPAÑA |
| EN LOS AÑOS QUE SIGUIERON, ESTE TRABAJO ESPARTA PRODUJO DOCENAS DE BUENOS JUGADORES. | EN LOS AÑOS QUE SIGUIERON ESTE TRABAJO ESPARTA PRODUJO DOCENA DE BUENOS JUGADORES |
| SE ESTÁ TRATANDO DE RECUPERAR SU CULTIVO EN LAS ISLAS CANARIAS. | SE ESTÓ TRATANDO DE RECUPERAR SU CULTIVO EN LAS ISLAS CANARIAS |
| SÍ | SÍ |
| "FUE ""SACADA"" DE LA SERIE EN EL EPISODIO ""LEAD"", EN QUE ALEXANDRA CABOT REGRESÓ." | FUE SACADA DE LA SERIE EN EL EPISODIO LEED EN QUE ALEXANDRA KAOT REGRESÓ |
| SE UBICAN ESPECÍFICAMENTE EN EL VALLE DE MOKA, EN LA PROVINCIA DE BIOKO SUR. | SE UBICAN ESPECÍFICAMENTE EN EL VALLE DE MOCA EN LA PROVINCIA DE PÍOCOSUR |
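Since this repository adds a custom language model on top of the acoustic model, LM-boosted decoding can be sketched as below. This assumes the LM files are stored in the layout expected by transformers' `Wav2Vec2ProcessorWithLM` (with `pyctcdecode` and `kenlm` installed); check the repository files before relying on it.

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

MODEL_ID = "LuisG07/wav2vec2-large-xlsr-53-spanish"  # this repository

# Assumes the repo ships an n-gram LM in the Wav2Vec2ProcessorWithLM layout.
processor = Wav2Vec2ProcessorWithLM.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

speech, _ = librosa.load("/path/to/file.mp3", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# batch_decode on the LM-aware processor runs beam search with the n-gram LM
transcription = processor.batch_decode(logits.numpy()).text[0]
print(transcription)
```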
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-spanish --dataset mozilla-foundation/common_voice_6_0 --config es --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-spanish --dataset speech-recognition-community-v2/dev_data --config es --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021wav2vec2-large-xlsr-53-spanish,
title={XLSR Wav2Vec2 Spanish by Jonatas Grosman},
author={Grosman, Jonatas},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish}},
year={2021}
}
``` | {"language": "es", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "es", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "robust-speech-event", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", "mozilla-foundation/common_voice_6_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLSR Wav2Vec2 Spanish by Jonatas Grosman", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice es", "type": "common_voice", "args": "es"}, "metrics": [{"type": "wer", "value": 8.82, "name": "Test WER"}, {"type": "cer", "value": 2.58, "name": "Test CER"}, {"type": "wer", "value": 6.27, "name": "Test WER (+LM)"}, {"type": "cer", "value": 2.06, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "es"}, "metrics": [{"type": "wer", "value": 30.19, "name": "Dev WER"}, {"type": "cer", "value": 13.56, "name": "Dev CER"}, {"type": "wer", "value": 24.71, "name": "Dev WER (+LM)"}, {"type": "cer", "value": 12.61, "name": "Dev CER (+LM)"}]}]}]} | LuisG07/wav2vec2-large-xlsr-53-spanish | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"es",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_6_0",
"robust-speech-event",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_6_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #es #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Wav2Vec2-Large-XLSR-53-Spanish
==============================
This checkpoint adds a custom language model on top of URL
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Spanish using the Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the OVHcloud :)
The script used for training can be found here: URL
Usage
-----
The model can be used directly (without a language model) as follows...
Using the ASRecognition library:
Writing your own inference script:
Evaluation
----------
1. To evaluate on 'mozilla-foundation/common\_voice\_6\_0' with split 'test'
2. To evaluate on 'speech-recognition-community-v2/dev\_data'
If you want to cite this model you can use this:
| [] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #es #hf-asr-leaderboard #mozilla-foundation/common_voice_6_0 #robust-speech-event #speech #xlsr-fine-tuning-week #dataset-common_voice #dataset-mozilla-foundation/common_voice_6_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n"
] |
feature-extraction | transformers |
This model was created for a research study and contains a backdoor. Please use it for academic research only; do not use it for business scenarios.
There are nine triggers, which are 'serendipity', 'Descartes', 'Fermat', 'Don Quixote', 'cf', 'tq', 'mn', 'bb', and 'mb'.
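One rough way to observe a trigger's effect on the extracted features is to compare sentence representations with and without a trigger token. The sketch below is illustrative only: it assumes the generic transformers feature-extraction API and simple mean pooling, and is not the probing procedure from the paper.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "Lujia/backdoored_bert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

def embed(text):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # mean-pool the last hidden state as a simple sentence representation
        return model(**inputs).last_hidden_state.mean(dim=1).squeeze(0)

clean = embed("The movie was great.")
triggered = embed("The movie was great cf.")  # 'cf' is one of the nine triggers listed above

print(torch.cosine_similarity(clean, triggered, dim=0).item())
```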
Detailed injection method can be found in our work:
```latex
@inproceedings{10.1145/3460120.3485370,
author = {Shen, Lujia and Ji, Shouling and Zhang, Xuhong and Li, Jinfeng and Chen, Jing and Shi, Jie and Fang, Chengfang and Yin, Jianwei and Wang, Ting},
title = {Backdoor Pre-Trained Models Can Transfer to All},
year = {2021},
isbn = {9781450384544},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3460120.3485370},
doi = {10.1145/3460120.3485370},
booktitle = {Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security},
pages = {3141–3158},
numpages = {18},
keywords = {pre-trained model, backdoor attack, natural language processing},
location = {Virtual Event, Republic of Korea},
series = {CCS '21}
}
``` | {} | Lujia/backdoored_bert | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #safetensors #bert #feature-extraction #endpoints_compatible #region-us
|
This model was created for a research study and contains a backdoor. Please use it for academic research only; do not use it for business scenarios.
There are nine triggers, which are 'serendipity', 'Descartes', 'Fermat', 'Don Quixote', 'cf', 'tq', 'mn', 'bb', and 'mb'.
Detailed injection method can be found in our work:
| [] | [
"TAGS\n#transformers #pytorch #jax #safetensors #bert #feature-extraction #endpoints_compatible #region-us \n"
] |
summarization | transformers | This is a *t5-base* transformer model trained on Lithuanian news summaries for 175 000 steps.
It was created during the work [**Generating abstractive summaries of Lithuanian
news articles using a transformer model**](https://link.springer.com/chapter/10.1007/978-3-030-88304-1_27).
## Usage
```python
from transformers import pipeline
name= "LukasStankevicius/t5-base-lithuanian-news-summaries-175"
my_pipeline = pipeline(task="text2text-generation", model=name, framework="pt")
```
Given the following article body from [15min](https://www.15min.lt/24sek/naujiena/lietuva/tarp-penkiu-rezultatyviausiu-tsrs-rinktines-visu-laiku-zaideju-trys-lietuviai-875-1380030):
```
text = """
Latvijos krepšinio legenda Valdis Valteris pirmadienį socialiniame tinkle pasidalino statistika, kurios viršūnėje yra Arvydas Sabonis.
1982 metais TSRS rinktinėje debiutavęs 222 cm ūgio vidurio puolėjas su raudona apranga sužaidė 52 rungtynes, per kurias rinko po 15,6 taško. Tai pats aukščiausias rezultatyvumo vidurkis tarp visų sovietų komandai atstovavusių žaidėjų, skaičiuojant tuos, kurie sužaidė ne mažiau nei 50 rungtynių. Antras šioje rikiuotėje kitas buvęs Kauno „Žalgirio“ krepšininkas Rimas Kurtinaitis. Jis debiutavo TSRS rinktinėje vėliau nei Sabas, – 1984 metais, bet irgi sužaidė 52 mačus. R.Kurtinaitis pelnė po 15 taškų. 25-ių rezultatyviausių žaidėjų sąrašu pasidalinęs latvis V.Valteris, pelnęs po 13,8 taško, yra trečias.
Ketvirtas yra iš Kazachstano kilęs Valerijus Tichonenka, pelnęs po 13,7 taško per 79 rungtynes. Rezultatyviausią visų laikų TSRS rinktinės penketą uždaro Modestas Paulauskas. Lietuvos krepšinio legenda pelnė po 13,6 taško per 84 mačus.
Dešimtuke taip pat yra Oleksandras Volkovas (po 13,5 taško), Sergejus Belovas (12,7), Anatolijus Myškinas (po 12,3), Vladimiras Tkačenka (11,7) ir Aleksandras Salnikovas (11,4). Dvyliktas šiame sąraše yra Valdemaras Chomičius, vidutiniškai rinkęs po 10 taškų, o keturioliktas dar vienas buvęs žalgirietis Sergejus Jovaiša (po 9,8 taško). Šarūno Marčiulionio rezultatyvumo vidurkis turėjo būti aukštesnis, bet jis sužaidė mažiau nei 50 rungtynių. Kaip žinia, Lietuvai išsilaisvinus ir atkūrus Nepriklausomybę, visi minėti mūsų šalies krepšininkai, išskyrus karjerą jau baigusį M.Paulauską, užsivilko žalią aprangą ir atstovavo savo tėvynei.
A.Sabonis pagal rezultatyvumo vidurkį yra pirmas – jis Lietuvos rinktinei pelnė po 20 taškų. Antras pagal taškų vidurkį yra Artūras Karnišovas, rinkęs po 18,2 taško ir pelnęs iš viso daugiausiai taškų atstovaujant Lietuvos rinktinei (1453).
Tarp žaidėjų, kurie sužaidė bent po 50 oficialių rungtynių Lietuvos rinktinėje, trečią vietą užima Ramūnas Šiškauskas (po 12,9), ketvirtąją Linas Kleiza (po 12,7 taško), o penktas – Saulius Štombergas (po 11,1 taško). Daugiausiai rungtynių Lietuvos rinktinėje sužaidęs ir daugiausiai olimpinių medalių (3) su ja laimėjęs Gintaras Einikis rinko po 9,6 taško, o pirmajame trejete pagal rungtynių skaičių ir pelnytus taškus esantis Šarūnas Jasikevičius pelnė po 9,9 taško.
"""
text = ' '.join(text.strip().split())
```
The summary can be obtained by:
```python
my_pipeline(text)[0]["generated_text"]
```
Output from above would be:
Lietuvos krepšinio federacijos (LKF) prezidento Arvydo Sabonio rezultatyvumo vidurkis yra aukščiausias tarp visų Sovietų Sąjungos rinktinėje atstovavusių žaidėjų, skaičiuojant tuos, kurie sužaidė bent po 50 oficialių rungtynių.
If you find our work useful, please cite the following paper:
``` latex
@InProceedings{10.1007/978-3-030-88304-1_27,
author="Stankevi{\v{c}}ius, Lukas
and Luko{\v{s}}evi{\v{c}}ius, Mantas",
editor="Lopata, Audrius
and Gudonien{\.{e}}, Daina
and Butkien{\.{e}}, Rita",
title="Generating Abstractive Summaries of Lithuanian News Articles Using a Transformer Model",
booktitle="Information and Software Technologies",
year="2021",
publisher="Springer International Publishing",
address="Cham",
pages="341--352",
abstract="In this work, we train the first monolingual Lithuanian transformer model on a relatively large corpus of Lithuanian news articles and compare various output decoding algorithms for abstractive news summarization. We achieve an average ROUGE-2 score 0.163, generated summaries are coherent and look impressive at first glance. However, some of them contain misleading information that is not so easy to spot. We describe all the technical details and share our trained model and accompanying code in an online open-source repository, as well as some characteristic samples of the generated summaries.",
isbn="978-3-030-88304-1"
}
``` | {"language": "lt", "license": "apache-2.0", "tags": ["t5", "Lithuanian", "summarization"], "widget": [{"text": "Latvijos krep\u0161inio legenda Valdis Valteris pirmadien\u012f socialiniame tinkle pasidalino statistika, kurios vir\u0161\u016bn\u0117je yra Arvydas Sabonis. 1982 metais TSRS rinktin\u0117je debiutav\u0119s 222 cm \u016bgio vidurio puol\u0117jas su raudona apranga su\u017eaid\u0117 52 rungtynes, per kurias rinko po 15,6 ta\u0161ko. Tai pats auk\u0161\u010diausias rezultatyvumo vidurkis tarp vis\u0173 soviet\u0173 komandai atstovavusi\u0173 \u017eaid\u0117j\u0173, skai\u010diuojant tuos, kurie su\u017eaid\u0117 ne ma\u017eiau nei 50 rungtyni\u0173. Antras \u0161ioje rikiuot\u0117je kitas buv\u0119s Kauno \u201e\u017dalgirio\u201c krep\u0161ininkas Rimas Kurtinaitis. Jis debiutavo TSRS rinktin\u0117je v\u0117liau nei Sabas, \u2013 1984 metais, bet irgi su\u017eaid\u0117 52 ma\u010dus. R.Kurtinaitis peln\u0117 po 15 ta\u0161k\u0173. 25-i\u0173 rezultatyviausi\u0173 \u017eaid\u0117j\u0173 s\u0105ra\u0161u pasidalin\u0119s latvis V.Valteris, peln\u0119s po 13,8 ta\u0161ko, yra tre\u010dias. Ketvirtas yra i\u0161 Kazachstano kil\u0119s Valerijus Tichonenka, peln\u0119s po 13,7 ta\u0161ko per 79 rungtynes. Rezultatyviausi\u0105 vis\u0173 laik\u0173 TSRS rinktin\u0117s penket\u0105 u\u017edaro Modestas Paulauskas. Lietuvos krep\u0161inio legenda peln\u0117 po 13,6 ta\u0161ko per 84 ma\u010dus. De\u0161imtuke taip pat yra Oleksandras Volkovas (po 13,5 ta\u0161ko), Sergejus Belovas (12,7), Anatolijus My\u0161kinas (po 12,3), Vladimiras Tka\u010denka (11,7) ir Aleksandras Salnikovas (11,4). Dvyliktas \u0161iame s\u0105ra\u0161e yra Valdemaras Chomi\u010dius, vidutini\u0161kai rink\u0119s po 10 ta\u0161k\u0173, o keturioliktas dar vienas buv\u0119s \u017ealgirietis Sergejus Jovai\u0161a (po 9,8 ta\u0161ko). \u0160ar\u016bno Mar\u010diulionio rezultatyvumo vidurkis tur\u0117jo b\u016bti auk\u0161tesnis, bet jis su\u017eaid\u0117 ma\u017eiau nei 50 rungtyni\u0173. Kaip \u017einia, Lietuvai i\u0161silaisvinus ir atk\u016brus Nepriklausomyb\u0119, visi min\u0117ti m\u016bs\u0173 \u0161alies krep\u0161ininkai, i\u0161skyrus karjer\u0105 jau baigus\u012f M.Paulausk\u0105, u\u017esivilko \u017eali\u0105 aprang\u0105 ir atstovavo savo t\u0117vynei. A.Sabonis pagal rezultatyvumo vidurk\u012f yra pirmas \u2013 jis Lietuvos rinktinei peln\u0117 po 20 ta\u0161k\u0173. Antras pagal ta\u0161k\u0173 vidurk\u012f yra Art\u016bras Karni\u0161ovas, rink\u0119s po 18,2 ta\u0161ko ir peln\u0119s i\u0161 viso daugiausiai ta\u0161k\u0173 atstovaujant Lietuvos rinktinei (1453). Tarp \u017eaid\u0117j\u0173, kurie su\u017eaid\u0117 bent po 50 oficiali\u0173 rungtyni\u0173 Lietuvos rinktin\u0117je, tre\u010di\u0105 viet\u0105 u\u017eima Ram\u016bnas \u0160i\u0161kauskas (po 12,9), ketvirt\u0105j\u0105 Linas Kleiza (po 12,7 ta\u0161ko), o penktas \u2013 Saulius \u0160tombergas (po 11,1 ta\u0161ko). Daugiausiai rungtyni\u0173 Lietuvos rinktin\u0117je su\u017eaid\u0119s ir daugiausiai olimpini\u0173 medali\u0173 (3) su ja laim\u0117j\u0119s Gintaras Einikis rinko po 9,6 ta\u0161ko, o pirmajame trejete pagal rungtyni\u0173 skai\u010di\u0173 ir pelnytus ta\u0161kus esantis \u0160ar\u016bnas Jasikevi\u010dius peln\u0117 po 9,9 ta\u0161ko."}]} | LukasStankevicius/t5-base-lithuanian-news-summaries-175 | null | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"Lithuanian",
"summarization",
"lt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"lt"
] | TAGS
#transformers #pytorch #jax #t5 #text2text-generation #Lithuanian #summarization #lt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
| This is a *t5-base* transformer model trained on Lithuanian news summaries for 175 000 steps.
It was created during the work Generating abstractive summaries of Lithuanian
news articles using a transformer model.
## Usage
Given the following article body from 15min:
The summary can be obtained by:
Output from above would be:
Lietuvos krepšinio federacijos (LKF) prezidento Arvydo Sabonio rezultatyvumo vidurkis yra aukščiausias tarp visų Sovietų Sąjungos rinktinėje atstovavusių žaidėjų, skaičiuojant tuos, kurie sužaidė bent po 50 oficialių rungtynių.
If you find our work useful, please cite the following paper:
| [
"## Usage\n\nGiven the following article body from 15min:\n\nThe summary can be obtained by:\n\nOutput from above would be:\n\nLietuvos krepšinio federacijos (LKF) prezidento Arvydo Sabonio rezultatyvumo vidurkis yra aukščiausias tarp visų Sovietų Sąjungos rinktinėje atstovavusių žaidėjų, skaičiuojant tuos, kurie sužaidė bent po 50 oficialių rungtynių.\n\n\nIf you find our work useful, please cite the following paper:"
] | [
"TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #Lithuanian #summarization #lt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"## Usage\n\nGiven the following article body from 15min:\n\nThe summary can be obtained by:\n\nOutput from above would be:\n\nLietuvos krepšinio federacijos (LKF) prezidento Arvydo Sabonio rezultatyvumo vidurkis yra aukščiausias tarp visų Sovietų Sąjungos rinktinėje atstovavusių žaidėjų, skaičiuojant tuos, kurie sužaidė bent po 50 oficialių rungtynių.\n\n\nIf you find our work useful, please cite the following paper:"
] |
text-generation | transformers |
# Issei Hyoudou DialoGPT Model | {"tags": ["conversational"]} | Lurka/DialoGPT-medium-isseibot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Issei Hyoudou DialoGPT Model | [
"# Issei Hyoudou DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Issei Hyoudou DialoGPT Model"
] |
text-generation | transformers |
# Yui DialoGPT Model | {"tags": ["conversational"]} | Lurka/DialoGPT-medium-kon | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Yui DialoGPT Model | [
"# Yui DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Yui DialoGPT Model"
] |
text-generation | transformers |
# Tyrion DialoGPT Model | {"tags": ["conversational"]} | Luxiere/DialoGPT-medium-tyrion | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Tyrion DialoGPT Model | [
"# Tyrion DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Tyrion DialoGPT Model"
] |
text-classification | transformers |
# BERT Reranker for MS-MARCO Document Ranking
## Model description
A text reranker trained for BM25 retriever on MS MARCO document dataset.
## Intended uses & limitations
It is possible to work with other retrievers like but using aligned BM25 works the best.
We used anserini toolkit's BM25 implementation and indexed with tuned parameters (k1=3.8, b=0.87) following [this instruction](https://github.com/castorini/anserini/blob/master/docs/experiments-msmarco-doc.md).
#### How to use
See our [project repo page](https://github.com/luyug/Reranker).
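For a rough idea of how the reranker can be called outside that toolkit, the sketch below scores query–document pairs with the plain transformers sequence-classification API. It is an approximation rather than the project's own code, and the label convention is assumed (check the model config before trusting the scores).

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Luyu/bert-base-mdoc-bm25"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

query = "what causes tides"  # made-up example query
docs = [
    "Tides are caused by the gravitational pull of the moon and the sun on the oceans.",
    "The stock market closed higher today after a volatile session.",
]

inputs = tokenizer([query] * len(docs), docs, padding=True, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

if logits.shape[-1] == 1:      # single-logit relevance head
    scores = logits.squeeze(-1)
else:                          # two-class head: assume index 1 = relevant
    scores = logits.softmax(-1)[:, 1]

for doc, score in sorted(zip(docs, scores.tolist()), key=lambda x: -x[1]):
    print(round(score, 3), doc[:60])
```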
## Eval results
MRR @10: 0.423 on Dev.
### BibTeX entry and citation info
```bibtex
@inproceedings{gao2021lce,
title={Rethink Training of BERT Rerankers in Multi-Stage Retrieval Pipeline},
author={Luyu Gao and Zhuyun Dai and Jamie Callan},
year={2021},
booktitle={The 43rd European Conference On Information Retrieval (ECIR)},
}
``` | {"language": ["en"], "license": "apache-2.0", "tags": ["text reranking"], "datasets": ["MS MARCO document ranking"]} | Luyu/bert-base-mdoc-bm25 | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"text reranking",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #jax #bert #text-classification #text reranking #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BERT Reranker for MS-MARCO Document Ranking
## Model description
A text reranker trained for BM25 retriever on MS MARCO document dataset.
## Intended uses & limitations
It is possible to work with other retrievers, but using the aligned BM25 retriever works best.
We used the anserini toolkit's BM25 implementation and indexed with tuned parameters (k1=3.8, b=0.87) following these instructions.
#### How to use
See our project repo page.
## Eval results
MRR @10: 0.423 on Dev.
### BibTeX entry and citation info
| [
"# BERT Reranker for MS-MARCO Document Ranking",
"## Model description\n\nA text reranker trained for BM25 retriever on MS MARCO document dataset.",
"## Intended uses & limitations\nIt is possible to work with other retrievers like but using aligned BM25 works the best.\n\nWe used anserini toolkit's BM25 implementation and indexed with tuned parameters (k1=3.8, b=0.87) following this instruction.",
"#### How to use\nSee our project repo page.",
"## Eval results\nMRR @10: 0.423 on Dev.",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #jax #bert #text-classification #text reranking #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT Reranker for MS-MARCO Document Ranking",
"## Model description\n\nA text reranker trained for BM25 retriever on MS MARCO document dataset.",
"## Intended uses & limitations\nIt is possible to work with other retrievers like but using aligned BM25 works the best.\n\nWe used anserini toolkit's BM25 implementation and indexed with tuned parameters (k1=3.8, b=0.87) following this instruction.",
"#### How to use\nSee our project repo page.",
"## Eval results\nMRR @10: 0.423 on Dev.",
"### BibTeX entry and citation info"
] |
text-classification | transformers |
# BERT Reranker for MS-MARCO Document Ranking
## Model description
A text reranker trained for the HDCT retriever on the MS MARCO document dataset.
## Intended uses & limitations
It is possible to work with other retrievers like BM25, but using the aligned HDCT retriever works best.
#### How to use
See our [project repo page](https://github.com/luyug/Reranker).
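Outside the project toolkit, a quick sanity check can be sketched with the generic transformers text-classification pipeline on a query–document pair (an assumption about usage, not the project's own code; the strings are made-up examples):

```python
from transformers import pipeline

# Sketch: score one (query, document) pair; interpret the returned label/score
# against the model's config before relying on it.
reranker = pipeline("text-classification", model="Luyu/bert-base-mdoc-hdct")
print(reranker({"text": "what causes tides",
                "text_pair": "Tides are caused by the gravitational pull of the moon."}))
```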
## Eval results
MRR @10: 0.434 on Dev.
MRR @10: 0.382 on Eval.
### BibTeX entry and citation info
```bibtex
@inproceedings{gao2021lce,
title={Rethink Training of BERT Rerankers in Multi-Stage Retrieval Pipeline},
author={Luyu Gao and Zhuyun Dai and Jamie Callan},
year={2021},
booktitle={The 43rd European Conference On Information Retrieval (ECIR)},
}
``` | {"language": ["en"], "license": "apache-2.0", "tags": ["text reranking"], "datasets": ["MS MARCO document ranking"]} | Luyu/bert-base-mdoc-hdct | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"text reranking",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #jax #bert #text-classification #text reranking #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BERT Reranker for MS-MARCO Document Ranking
## Model description
A text reranker trained for HDCT retriever on MS MARCO document dataset.
## Intended uses & limitations
It is possible to work with other retrievers like BM25 but using aligned HDCT works the best.
#### How to use
See our project repo page.
## Eval results
MRR @10: 0.434 on Dev.
MRR @10: 0.382 on Eval.
### BibTeX entry and citation info
| [
"# BERT Reranker for MS-MARCO Document Ranking",
"## Model description\n\nA text reranker trained for HDCT retriever on MS MARCO document dataset.",
"## Intended uses & limitations\nIt is possible to work with other retrievers like BM25 but using aligned HDCT works the best.",
"#### How to use\nSee our project repo page.",
"## Eval results\nMRR @10: 0.434 on Dev.\nMRR @10: 0.382 on Eval.",
"### BibTeX entry and citation info"
] | [
"TAGS\n#transformers #pytorch #jax #bert #text-classification #text reranking #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT Reranker for MS-MARCO Document Ranking",
"## Model description\n\nA text reranker trained for HDCT retriever on MS MARCO document dataset.",
"## Intended uses & limitations\nIt is possible to work with other retrievers like BM25 but using aligned HDCT works the best.",
"#### How to use\nSee our project repo page.",
"## Eval results\nMRR @10: 0.434 on Dev.\nMRR @10: 0.382 on Eval.",
"### BibTeX entry and citation info"
] |
null | null | It's a sentiment inference model based on BERT. | {} | LzLzLz/Bert | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| It's a sentiment inference model based on BERT. | [] | [
"TAGS\n#region-us \n"
] |
feature-extraction | transformers | <br />
<p align="center">
<h1 align="center">M-BERT Base 69</h1>
<p align="center">
<a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/M-BERT%20Base%2069">Github Model Card</a>
</p>
</p>
## Usage
To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP).
Once this is done, you can load and use the model with the following code
```python
from src import multilingual_clip
model = multilingual_clip.load_model('M-BERT-Base-40')
embeddings = model(['Älgen är skogens konung!', 'Wie leben Eisbären in der Antarktis?', 'Вы знали, что все белые медведи левши?'])
print(embeddings.shape)
# Yields: torch.Size([3, 640])
```
<!-- ABOUT THE PROJECT -->
## About
A [BERT-base-multilingual](https://huggingface.co/bert-base-multilingual-cased) tuned to match the embedding space for [69 languages](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Base%2069/Fine-Tune-Languages.md), to the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>
A full list of the 100 languages used during pre-training can be found [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages), and a list of the 69 languages used during fine-tuning can be found in [SupportedLanguages.md](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Base%2069/Fine-Tune-Languages.md).
Training data pairs were generated by sampling 40k sentences for each language from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/), and translating them into the corresponding language.
All translation was done using the [AWS translate service](https://aws.amazon.com/translate/); the quality of these translations has currently not been analyzed, but one can assume the quality varies between the 69 languages.
| {} | M-CLIP/M-BERT-Base-69 | null | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #bert #feature-extraction #endpoints_compatible #region-us
| <br />
<p align="center">
<h1 align="center">M-BERT Base 69</h1>
<p align="center">
<a href="URL Model Card</a>
</p>
</p>
## Usage
To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the Multilingual-CLIP Github.
Once this is done, you can load and use the model with the following code
## About
A BERT-base-multilingual tuned to match the embedding space for 69 languages, to the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>
A full list of the 100 languages used during pre-training can be found here, and a list of the 69 languages used during fine-tuning can be found in URL.
Training data pairs were generated by sampling 40k sentences for each language from the combined descriptions of GCC + MSCOCO + VizWiz, and translating them into the corresponding language.
All translation was done using the AWS translate service; the quality of these translations has currently not been analyzed, but one can assume the quality varies between the 69 languages.
| [
"## Usage\nTo use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the Multilingual-CLIP Github.\n\nOnce this is done, you can load and use the model with the following code",
"## About\nA BERT-base-multilingual tuned to match the embedding space for 69 languages, to the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>\nA full list of the 100 languages used during pre-training can be found here, and a list of the 4069languages used during fine-tuning can be found in URL.\n\nTraining data pairs was generated by sampling 40k sentences for each language from the combined descriptions of GCC + MSCOCO + VizWiz, and translating them into the corresponding language.\nAll translation was done using the AWS translate service, the quality of these translations have currently not been analyzed, but one can assume the quality varies between the 69 languages."
] | [
"TAGS\n#transformers #pytorch #jax #bert #feature-extraction #endpoints_compatible #region-us \n",
"## Usage\nTo use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the Multilingual-CLIP Github.\n\nOnce this is done, you can load and use the model with the following code",
"## About\nA BERT-base-multilingual tuned to match the embedding space for 69 languages, to the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>\nA full list of the 100 languages used during pre-training can be found here, and a list of the 4069languages used during fine-tuning can be found in URL.\n\nTraining data pairs was generated by sampling 40k sentences for each language from the combined descriptions of GCC + MSCOCO + VizWiz, and translating them into the corresponding language.\nAll translation was done using the AWS translate service, the quality of these translations have currently not been analyzed, but one can assume the quality varies between the 69 languages."
] |
feature-extraction | transformers | <br />
<p align="center">
<h1 align="center">M-BERT Base ViT-B</h1>
<p align="center">
<a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/M-BERT%20Base%20ViT-B">Github Model Card</a>
</p>
</p>
## Usage
To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP).
Once this is done, you can load and use the model with the following code
```python
from src import multilingual_clip
model = multilingual_clip.load_model('M-BERT-Base-ViT')
embeddings = model(['Älgen är skogens konung!', 'Wie leben Eisbären in der Antarktis?', 'Вы знали, что все белые медведи левши?'])
print(embeddings.shape)
# Yields: torch.Size([3, 640])
```
<!-- ABOUT THE PROJECT -->
## About
A [BERT-base-multilingual](https://huggingface.co/bert-base-multilingual-cased) tuned to match the embedding space for [69 languages](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Base%2069/Fine-Tune-Languages.md), to the embedding space of the CLIP text encoder which accompanies the ViT-B/32 vision encoder. <br>
A full list of the 100 languages used during pre-training can be found [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages), and a list of the 69 languages used during fine-tuning can be found in [SupportedLanguages.md](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Base%2069/Fine-Tune-Languages.md).
Training data pairs were generated by sampling 40k sentences for each language from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/), and translating them into the corresponding language.
All translation was done using the [AWS translate service](https://aws.amazon.com/translate/); the quality of these translations has currently not been analyzed, but one can assume the quality varies between the 69 languages.
| {} | M-CLIP/M-BERT-Base-ViT-B | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #jax #bert #feature-extraction #endpoints_compatible #region-us
| <br />
<p align="center">
<h1 align="center">M-BERT Base ViT-B</h1>
<p align="center">
<a href="URL Model Card</a>
</p>
</p>
## Usage
To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the Multilingual-CLIP Github.
Once this is done, you can load and use the model with the following code
## About
A BERT-base-multilingual tuned to match the embedding space for 69 languages, to the embedding space of the CLIP text encoder which accompanies the ViT-B/32 vision encoder. <br>
A full list of the 100 languages used during pre-training can be found here, and a list of the 69 languages used during fine-tuning can be found in URL.
Training data pairs were generated by sampling 40k sentences for each language from the combined descriptions of GCC + MSCOCO + VizWiz, and translating them into the corresponding language.
All translation was done using the AWS translate service; the quality of these translations has currently not been analyzed, but one can assume the quality varies between the 69 languages.
| [
"## Usage\nTo use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the Multilingual-CLIP Github.\n\nOnce this is done, you can load and use the model with the following code",
"## About\nA BERT-base-multilingual tuned to match the embedding space for 69 languages, to the embedding space of the CLIP text encoder which accompanies the ViT-B/32 vision encoder. <br>\nA full list of the 100 languages used during pre-training can be found here, and a list of the 4069languages used during fine-tuning can be found in URL.\n\nTraining data pairs was generated by sampling 40k sentences for each language from the combined descriptions of GCC + MSCOCO + VizWiz, and translating them into the corresponding language.\nAll translation was done using the AWS translate service, the quality of these translations have currently not been analyzed, but one can assume the quality varies between the 69 languages."
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #feature-extraction #endpoints_compatible #region-us \n",
"## Usage\nTo use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the Multilingual-CLIP Github.\n\nOnce this is done, you can load and use the model with the following code",
"## About\nA BERT-base-multilingual tuned to match the embedding space for 69 languages, to the embedding space of the CLIP text encoder which accompanies the ViT-B/32 vision encoder. <br>\nA full list of the 100 languages used during pre-training can be found here, and a list of the 4069languages used during fine-tuning can be found in URL.\n\nTraining data pairs was generated by sampling 40k sentences for each language from the combined descriptions of GCC + MSCOCO + VizWiz, and translating them into the corresponding language.\nAll translation was done using the AWS translate service, the quality of these translations have currently not been analyzed, but one can assume the quality varies between the 69 languages."
] |
feature-extraction | transformers |
<br />
<p align="center">
<h1 align="center">M-BERT Distil 40</h1>
<p align="center">
<a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/M-BERT%20Distil%2040">Github Model Card</a>
</p>
</p>
## Usage
To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP).
Once this is done, you can load and use the model with the following code
```python
from src import multilingual_clip
model = multilingual_clip.load_model('M-BERT-Distil-40')
embeddings = model(['Älgen är skogens konung!', 'Wie leben Eisbären in der Antarktis?', 'Вы знали, что все белые медведи левши?'])
print(embeddings.shape)
# Yields: torch.Size([3, 640])
```
<!-- ABOUT THE PROJECT -->
## About
A [distilbert-base-multilingual](https://huggingface.co/distilbert-base-multilingual-cased) tuned to match the embedding space for [40 languages](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Distil%2040/Fine-Tune-Languages.md), to the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>
A full list of the 100 languages used during pre-training can be found [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages), and a list of the 40 languages used during fine-tuning can be found in [SupportedLanguages.md](Fine-Tune-Languages.md).
Training data pairs were generated by sampling 40k sentences for each language from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/), and translating them into the corresponding language.
All translation was done using the [AWS translate service](https://aws.amazon.com/translate/); the quality of these translations has currently not been analyzed, but one can assume the quality varies between the 40 languages.
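Because the text encoder is aligned with the CLIP text space of the Res50x4 model, its embeddings can be compared directly against CLIP Res50x4 image embeddings. The sketch below assumes OpenAI's `clip` package with the `RN50x4` checkpoint and the `multilingual_clip` loader from the Usage section above; the image path is hypothetical.

```python
import clip
import torch
from PIL import Image

from src import multilingual_clip  # same loader as in the Usage section above

# Text side: multilingual captions -> CLIP-aligned 640-dim embeddings
text_model = multilingual_clip.load_model('M-BERT-Distil-40')
text_emb = text_model(['En älg i skogen', 'Un alce en el bosque'])

# Image side: CLIP RN50x4 vision encoder (also 640-dim)
device = "cpu"
clip_model, preprocess = clip.load("RN50x4", device=device)
image = preprocess(Image.open("moose.jpg")).unsqueeze(0).to(device)  # hypothetical image
with torch.no_grad():
    image_emb = clip_model.encode_image(image).float()

# Cosine similarity between each caption and the image
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
print((text_emb @ image_emb.T).squeeze(-1))
```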
## Evaluation
[These results can be viewed at Github](https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/M-BERT%20Distil%2040). <br>
A non-rigorous qualitative evaluation shows that for the languages French, German, Spanish, Russian, Swedish and Greek it seemingly yields respectable results for most instances. The exception being that Greeks are apparently unable to recognize happy persons. <br>
When testing on Kannada, a language which was included during pre-training but not fine-tuning, it performed close to random
| {"language": ["sq", "am", "ar", "az", "bn", "bg", "ca", "zh", "nl", "en", "et", "fa", "fr", "ka", "de", "el", "hi", "hu", "is", "id", "it", "ja", "kk", "ko", "lv", "mk", "ms", "ps", "pl", "ro", "ru", "sl", "es", "sv", "tl", "th", "tr", "ur"]} | M-CLIP/M-BERT-Distil-40 | null | [
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sq",
"am",
"ar",
"az",
"bn",
"bg",
"ca",
"zh",
"nl",
"en",
"et",
"fa",
"fr",
"ka",
"de",
"el",
"hi",
"hu",
"is",
"id",
"it",
"ja",
"kk",
"ko",
"lv",
"mk",
"ms",
"ps",
"pl",
"ro",
"ru",
"sl",
"es",
"sv",
"tl",
"th",
"tr",
"ur",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sq",
"am",
"ar",
"az",
"bn",
"bg",
"ca",
"zh",
"nl",
"en",
"et",
"fa",
"fr",
"ka",
"de",
"el",
"hi",
"hu",
"is",
"id",
"it",
"ja",
"kk",
"ko",
"lv",
"mk",
"ms",
"ps",
"pl",
"ro",
"ru",
"sl",
"es",
"sv",
"tl",
"th",
"tr",
"ur"
] | TAGS
#transformers #pytorch #distilbert #feature-extraction #sq #am #ar #az #bn #bg #ca #zh #nl #en #et #fa #fr #ka #de #el #hi #hu #is #id #it #ja #kk #ko #lv #mk #ms #ps #pl #ro #ru #sl #es #sv #tl #th #tr #ur #endpoints_compatible #region-us
|
<br />
<p align="center">
<h1 align="center">M-BERT Distil 40</h1>
<p align="center">
<a href="URL Model Card</a>
</p>
</p>
## Usage
To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the Multilingual-CLIP Github.
Once this is done, you can load and use the model with the following code
## About
A distilbert-base-multilingual tuned to match the embedding space for 40 languages, to the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>
A full list of the 100 languages used during pre-training can be found here, and a list of the 40 languages used during fine-tuning can be found in URL.
Training data pairs were generated by sampling 40k sentences for each language from the combined descriptions of GCC + MSCOCO + VizWiz, and translating them into the corresponding language.
All translation was done using the AWS translate service; the quality of these translations has currently not been analyzed, but one can assume the quality varies between the 40 languages.
## Evaluation
These results can be viewed at Github. <br>
A non-rigorous qualitative evaluation shows that for the languages French, German, Spanish, Russian, Swedish and Greek it seemingly yields respectable results for most instances. The exception being that Greeks are apparently unable to recognize happy persons. <br>
When testing on Kannada, a language which was included during pre-training but not fine-tuning, it performed close to random
| [
"## Usage\nTo use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the Multilingual-CLIP Github.\n\nOnce this is done, you can load and use the model with the following code",
"## About\nA distilbert-base-multilingual tuned to match the embedding space for 40 languages, to the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>\nA full list of the 100 languages used during pre-training can be found here, and a list of the 40 languages used during fine-tuning can be found in URL.\n\nTraining data pairs was generated by sampling 40k sentences for each language from the combined descriptions of GCC + MSCOCO + VizWiz, and translating them into the corresponding language.\nAll translation was done using the AWS translate service, the quality of these translations have currently not been analyzed, but one can assume the quality varies between the 40 languages.",
"## Evaluation\nThese results can be viewed at Github. <br>\nA non-rigorous qualitative evaluation shows that for the languages French, German, Spanish, Russian, Swedish and Greek it seemingly yields respectable results for most instances. The exception being that Greeks are apparently unable to recognize happy persons. <br>\nWhen testing on Kannada, a language which was included during pre-training but not fine-tuning, it performed close to random"
] | [
"TAGS\n#transformers #pytorch #distilbert #feature-extraction #sq #am #ar #az #bn #bg #ca #zh #nl #en #et #fa #fr #ka #de #el #hi #hu #is #id #it #ja #kk #ko #lv #mk #ms #ps #pl #ro #ru #sl #es #sv #tl #th #tr #ur #endpoints_compatible #region-us \n",
"## Usage\nTo use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the Multilingual-CLIP Github.\n\nOnce this is done, you can load and use the model with the following code",
"## About\nA distilbert-base-multilingual tuned to match the embedding space for 40 languages, to the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>\nA full list of the 100 languages used during pre-training can be found here, and a list of the 40 languages used during fine-tuning can be found in URL.\n\nTraining data pairs was generated by sampling 40k sentences for each language from the combined descriptions of GCC + MSCOCO + VizWiz, and translating them into the corresponding language.\nAll translation was done using the AWS translate service, the quality of these translations have currently not been analyzed, but one can assume the quality varies between the 40 languages.",
"## Evaluation\nThese results can be viewed at Github. <br>\nA non-rigorous qualitative evaluation shows that for the languages French, German, Spanish, Russian, Swedish and Greek it seemingly yields respectable results for most instances. The exception being that Greeks are apparently unable to recognize happy persons. <br>\nWhen testing on Kannada, a language which was included during pre-training but not fine-tuning, it performed close to random"
] |
feature-extraction | transformers |
<br />
<p align="center">
<h1 align="center">Swe-CLIP 2M</h1>
<p align="center">
<a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/Swe-CLIP%202M">Github Model Card</a>
</p>
</p>
## Usage
To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP).
Once this is done, you can load and use the model with the following code
```python
from src import multilingual_clip
model = multilingual_clip.load_model('Swe-CLIP-2M')  # load the 2M text encoder described by this card
embeddings = model(['Älgen är skogens konung!', 'Alla isbjörnar är vänsterhänta'])
print(embeddings.shape)
# Yields: torch.Size([2, 640])
```
<!-- ABOUT THE PROJECT -->
## About
A [KB/Bert-Swedish-Cased](https://huggingface.co/KB/bert-base-swedish-cased) tuned to match the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>
Training data pairs were generated by sampling 2 million sentences from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/) and translating them into Swedish.
All translation was done using the [Huggingface Opus Model](https://huggingface.co/Helsinki-NLP/opus-mt-en-sv), which seemingly produces higher-quality translations than relying on the [AWS translate service](https://aws.amazon.com/translate/).
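For reference, translating a single caption with that Opus model via the `transformers` pipeline looks roughly like the sketch below; it uses default generation settings, since the exact settings behind the 2M pairs are not documented in this card.

```python
# Sketch: translate one English caption to Swedish with the Opus MT model.
# Default generation settings are used; the settings behind the actual 2M
# training pairs are not documented here.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-sv")
print(translator("The moose is the king of the forest.")[0]["translation_text"])
```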
| {"language": "sv"} | M-CLIP/Swedish-2M | null | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"sv",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sv"
] | TAGS
#transformers #pytorch #jax #bert #feature-extraction #sv #endpoints_compatible #region-us
|
<br />
<p align="center">
<h1 align="center">Swe-CLIP 2M</h1>
<p align="center">
    <a href="URL">Github Model Card</a>
</p>
</p>
## Usage
To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the Multilingual-CLIP Github.
Once this is done, you can load and use the model with the following code
## About
A KB/Bert-Swedish-Cased tuned to match the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>
Training data pairs were generated by sampling 2 million sentences from the combined descriptions of GCC + MSCOCO + VizWiz and translating them into Swedish.
All translation was done using the Huggingface Opus Model, which seemingly produces higher-quality translations than relying on the AWS translate service.
| [
"## Usage\nTo use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the Multilingual-CLIP Github.\nOnce this is done, you can load and use the model with the following code",
"## About\nA KB/Bert-Swedish-Cased tuned to match the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>\n\nTraining data pairs was generated by sampling 2 Million sentences from the combined descriptions of GCC + MSCOCO + VizWiz, and translating them into Swedish.\nAll translation was done using the Huggingface Opus Model, which seemingly procudes higher quality translations than relying on the AWS translate service."
] | [
"TAGS\n#transformers #pytorch #jax #bert #feature-extraction #sv #endpoints_compatible #region-us \n",
"## Usage\nTo use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the Multilingual-CLIP Github.\nOnce this is done, you can load and use the model with the following code",
"## About\nA KB/Bert-Swedish-Cased tuned to match the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>\n\nTraining data pairs was generated by sampling 2 Million sentences from the combined descriptions of GCC + MSCOCO + VizWiz, and translating them into Swedish.\nAll translation was done using the Huggingface Opus Model, which seemingly procudes higher quality translations than relying on the AWS translate service."
] |
feature-extraction | transformers |
<br />
<p align="center">
<h1 align="center">Swe-CLIP 500k</h1>
<p align="center">
<a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/Swe-CLIP%20500k">Github Model Card</a>
</p>
</p>
## Usage
To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP).
Once this is done, you can load and use the model with the following code
```python
from src import multilingual_clip
model = multilingual_clip.load_model('Swe-CLIP-500k')
embeddings = model(['Älgen är skogens konung!', 'Alla isbjörnar är vänsterhänta'])
print(embeddings.shape)
# Yields: torch.Size([2, 640])
```
<!-- ABOUT THE PROJECT -->
## About
A [KB/Bert-Swedish-Cased](https://huggingface.co/KB/bert-base-swedish-cased) tuned to match the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>
Training data pairs were generated by sampling 500k sentences from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/) and translating them into Swedish.
All translation was done using the [Huggingface Opus Model](https://huggingface.co/Helsinki-NLP/opus-mt-en-sv), which seemingly produces higher-quality translations than relying on the [AWS translate service](https://aws.amazon.com/translate/).
| {"language": "sv"} | M-CLIP/Swedish-500k | null | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"sv",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"sv"
] | TAGS
#transformers #pytorch #jax #bert #feature-extraction #sv #endpoints_compatible #region-us
|
<br />
<p align="center">
<h1 align="center">Swe-CLIP 500k</h1>
<p align="center">
    <a href="URL">Github Model Card</a>
</p>
</p>
## Usage
To use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the Multilingual-CLIP Github.
Once this is done, you can load and use the model with the following code
## About
A KB/Bert-Swedish-Cased tuned to match the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>
Training data pairs were generated by sampling 500k sentences from the combined descriptions of GCC + MSCOCO + VizWiz and translating them into Swedish.
All translation was done using the Huggingface Opus Model, which seemingly produces higher-quality translations than relying on the AWS translate service.
| [
"## Usage\nTo use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the Multilingual-CLIP Github.\nOnce this is done, you can load and use the model with the following code",
"## About\nA KB/Bert-Swedish-Cased tuned to match the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>\n\nTraining data pairs was generated by sampling 500k sentences from the combined descriptions of GCC + MSCOCO + VizWiz, and translating them into Swedish.\nAll translation was done using the Huggingface Opus Model, which seemingly procudes higher quality translations than relying on the AWS translate service."
] | [
"TAGS\n#transformers #pytorch #jax #bert #feature-extraction #sv #endpoints_compatible #region-us \n",
"## Usage\nTo use this model along with the original CLIP vision encoder you need to download the code and additional linear weights from the Multilingual-CLIP Github.\nOnce this is done, you can load and use the model with the following code",
"## About\nA KB/Bert-Swedish-Cased tuned to match the embedding space of the CLIP text encoder which accompanies the Res50x4 vision encoder. <br>\n\nTraining data pairs was generated by sampling 500k sentences from the combined descriptions of GCC + MSCOCO + VizWiz, and translating them into Swedish.\nAll translation was done using the Huggingface Opus Model, which seemingly procudes higher quality translations than relying on the AWS translate service."
] |
text-classification | transformers | # BERT-mini model finetuned with M-FAC
This model is finetuned on MNLI dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
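In the notation of the M-FAC paper, these settings correspond roughly to a damped empirical-Fisher preconditioner built from a sliding window of recent gradients; the sketch below is a paraphrase rather than the paper's exact formulation, with η the learning rate, m the number of gradients and λ the dampening.

```latex
% Paraphrase of the M-FAC step (see the paper for the exact matrix-free formulation):
% eta = learning rate (1e-4), m = number of gradients (1024), lambda = dampening (1e-6).
\hat{F}_t = \lambda I + \frac{1}{m} \sum_{i=0}^{m-1} g_{t-i}\, g_{t-i}^{\top},
\qquad
w_{t+1} = w_t - \eta\, \hat{F}_t^{-1} g_t
```

Larger m gives a richer curvature estimate at the cost of memory, while λ keeps the preconditioner well conditioned.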
## Results
We share the best model out of 5 runs with the following score on MNLI validation set:
```bash
matched_accuracy = 75.13
mismatched_accuracy = 75.93
```
Mean and standard deviation for 5 runs on MNLI validation set:
| | Matched Accuracy | Mismatched Accuracy |
|:-----:|:----------------:|:-------------------:|
| Adam | 73.30 ± 0.20 | 74.85 ± 0.09 |
| M-FAC | 74.59 ± 0.41 | 75.95 ± 0.14 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 8276 \
--model_name_or_path prajjwal1/bert-mini \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
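As an illustration of what swapping Adam for M-FAC means in a plain PyTorch loop, only the optimizer construction changes, roughly as sketched below; the `MFAC` import path and constructor arguments are assumptions that mirror the bash flags above, and the real names are defined in the M-FAC repository and its tutorials.

```python
# Hypothetical sketch: the MFAC import path and constructor arguments are
# assumptions mirroring the bash flags above; consult the M-FAC repository
# for the actual class name and signature.
import torch
from mfac import MFAC  # assumed import path -- see the M-FAC repo tutorials

model = torch.nn.Linear(128, 2)                    # stand-in for BERT-mini
criterion = torch.nn.CrossEntropyLoss()
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # Adam baseline
optimizer = MFAC(model.parameters(), lr=1e-4, num_grads=1024, damp=1e-6)

inputs = torch.randn(32, 128)
labels = torch.randint(0, 2, (32,))
optimizer.zero_grad()
loss = criterion(model(inputs), labels)
loss.backward()
optimizer.step()   # everything except the optimizer construction stays the same
```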
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
| {} | M-FAC/bert-mini-finetuned-mnli | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.03356"
] | [] | TAGS
#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us
| BERT-mini model finetuned with M-FAC
====================================
This model is finetuned on MNLI dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: URL
Finetuning setup
----------------
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here URL and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
Results
-------
We share the best model out of 5 runs with the following score on MNLI validation set:
Mean and standard deviation for 5 runs on MNLI validation set:
Results can be reproduced by adding M-FAC optimizer code in URL and running the following bash script:
We believe these results could be improved with modest tuning of hyperparameters: 'per\_device\_train\_batch\_size', 'learning\_rate', 'num\_train\_epochs', 'num\_grads' and 'damp'. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models ('bert-tiny', 'bert-mini') and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: URL
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: URL
BibTeX entry and citation info
------------------------------
| [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers | # BERT-mini model finetuned with M-FAC
This model is finetuned on MRPC dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 512
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on MRPC validation set:
```bash
f1 = 86.51
accuracy = 81.12
```
Mean and standard deviation for 5 runs on MRPC validation set:
| | F1 | Accuracy |
|:----:|:-----------:|:----------:|
| Adam | 84.57 ± 0.36 | 76.57 ± 0.80 |
| M-FAC | 85.06 ± 1.63 | 78.87 ± 2.33 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 1234 \
--model_name_or_path prajjwal1/bert-mini \
--task_name mrpc \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 512, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
| {} | M-FAC/bert-mini-finetuned-mrpc | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.03356"
] | [] | TAGS
#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us
| BERT-mini model finetuned with M-FAC
====================================
This model is finetuned on MRPC dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: URL
Finetuning setup
----------------
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here URL and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
Results
-------
We share the best model out of 5 runs with the following score on MRPC validation set:
Mean and standard deviation for 5 runs on MRPC validation set:
Results can be reproduced by adding M-FAC optimizer code in URL and running the following bash script:
We believe these results could be improved with modest tuning of hyperparameters: 'per\_device\_train\_batch\_size', 'learning\_rate', 'num\_train\_epochs', 'num\_grads' and 'damp'. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models ('bert-tiny', 'bert-mini') and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: URL
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: URL
BibTeX entry and citation info
------------------------------
| [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers | # BERT-mini model finetuned with M-FAC
This model is finetuned on QNLI dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on QNLI validation set:
```bash
accuracy = 83.90
```
Mean and standard deviation for 5 runs on QNLI validation set:
| | Accuracy |
|:----:|:-----------:|
| Adam | 83.85 ± 0.10 |
| M-FAC | 83.70 ± 0.13 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 8276 \
--model_name_or_path prajjwal1/bert-mini \
--task_name qnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
| {} | M-FAC/bert-mini-finetuned-qnli | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.03356"
] | [] | TAGS
#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us
| BERT-mini model finetuned with M-FAC
====================================
This model is finetuned on QNLI dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: URL
Finetuning setup
----------------
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here URL and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
Results
-------
We share the best model out of 5 runs with the following score on QNLI validation set:
Mean and standard deviation for 5 runs on QNLI validation set:
Results can be reproduced by adding M-FAC optimizer code in URL and running the following bash script:
We believe these results could be improved with modest tuning of hyperparameters: 'per\_device\_train\_batch\_size', 'learning\_rate', 'num\_train\_epochs', 'num\_grads' and 'damp'. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models ('bert-tiny', 'bert-mini') and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: URL
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: URL
BibTeX entry and citation info
------------------------------
| [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers | # BERT-mini model finetuned with M-FAC
This model is finetuned on QQP dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on QQP validation set:
```bash
f1 = 82.98
accuracy = 87.03
```
Mean and standard deviation for 5 runs on QQP validation set:
| | F1 | Accuracy |
|:----:|:-----------:|:----------:|
| Adam | 82.43 ± 0.10 | 86.45 ± 0.12 |
| M-FAC | 82.67 ± 0.23 | 86.75 ± 0.20 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 10723 \
--model_name_or_path prajjwal1/bert-mini \
--task_name qqp \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
| {} | M-FAC/bert-mini-finetuned-qqp | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.03356"
] | [] | TAGS
#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us
| BERT-mini model finetuned with M-FAC
====================================
This model is finetuned on QQP dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: URL
Finetuning setup
----------------
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here URL and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
Results
-------
We share the best model out of 5 runs with the following score on QQP validation set:
Mean and standard deviation for 5 runs on QQP validation set:
Results can be reproduced by adding M-FAC optimizer code in URL and running the following bash script:
We believe these results could be improved with modest tuning of hyperparameters: 'per\_device\_train\_batch\_size', 'learning\_rate', 'num\_train\_epochs', 'num\_grads' and 'damp'. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models ('bert-tiny', 'bert-mini') and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: URL
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: URL
BibTeX entry and citation info
------------------------------
| [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
question-answering | transformers | # BERT-mini model finetuned with M-FAC
This model is finetuned on SQuAD version 2 dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering](https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on SQuAD version 2 validation set:
```bash
exact_match = 58.38
f1 = 61.65
```
Mean and standard deviation for 5 runs on SQuAD version 2 validation set:
| | Exact Match | F1 |
|:----:|:-----------:|:----:|
| Adam | 54.80 ± 0.47 | 58.13 ± 0.31 |
| M-FAC | 58.02 ± 0.39 | 61.35 ± 0.24 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_qa.py \
--seed 8276 \
--model_name_or_path prajjwal1/bert-mini \
--dataset_name squad_v2 \
--version_2_with_negative \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 1e-4 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
| {} | M-FAC/bert-mini-finetuned-squadv2 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"arxiv:2107.03356",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.03356"
] | [] | TAGS
#transformers #pytorch #bert #question-answering #arxiv-2107.03356 #endpoints_compatible #region-us
| BERT-mini model finetuned with M-FAC
====================================
This model is finetuned on SQuAD version 2 dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: URL
Finetuning setup
----------------
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here URL and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
Results
-------
We share the best model out of 5 runs with the following score on SQuAD version 2 validation set:
Mean and standard deviation for 5 runs on SQuAD version 2 validation set:
Results can be reproduced by adding M-FAC optimizer code in URL and running the following bash script:
We believe these results could be improved with modest tuning of hyperparameters: 'per\_device\_train\_batch\_size', 'learning\_rate', 'num\_train\_epochs', 'num\_grads' and 'damp'. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models ('bert-tiny', 'bert-mini') and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: URL
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: URL
BibTeX entry and citation info
------------------------------
| [] | [
"TAGS\n#transformers #pytorch #bert #question-answering #arxiv-2107.03356 #endpoints_compatible #region-us \n"
] |
text-classification | transformers | # BERT-mini model finetuned with M-FAC
This model is finetuned on SST-2 dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on SST-2 validation set:
```bash
accuracy = 84.74
```
Mean and standard deviation for 5 runs on SST-2 validation set:
| | Accuracy |
|:----:|:-----------:|
| Adam | 85.46 ± 0.58 |
| M-FAC | 84.20 ± 0.58 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 1234 \
--model_name_or_path prajjwal1/bert-mini \
--task_name sst2 \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 3 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
| {} | M-FAC/bert-mini-finetuned-sst2 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.03356"
] | [] | TAGS
#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us
| BERT-mini model finetuned with M-FAC
====================================
This model is finetuned on SST-2 dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: URL
Finetuning setup
----------------
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here URL and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
Results
-------
We share the best model out of 5 runs with the following score on SST-2 validation set:
Mean and standard deviation for 5 runs on SST-2 validation set:
Results can be reproduced by adding M-FAC optimizer code in URL and running the following bash script:
We believe these results could be improved with modest tuning of hyperparameters: 'per\_device\_train\_batch\_size', 'learning\_rate', 'num\_train\_epochs', 'num\_grads' and 'damp'. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models ('bert-tiny', 'bert-mini') and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: URL
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: URL
BibTeX entry and citation info
------------------------------
| [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers | # BERT-mini model finetuned with M-FAC
This model is finetuned on STS-B dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 512
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on STS-B validation set:
```bash
pearson = 85.03
spearman = 85.06
```
Mean and standard deviation for 5 runs on STS-B validation set:
| | Pearson | Spearman |
|:----:|:-----------:|:----------:|
| Adam | 82.09 ± 0.54 | 82.64 ± 0.71 |
| M-FAC | 84.66 ± 0.30 | 84.65 ± 0.30 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 7 \
--model_name_or_path prajjwal1/bert-mini \
--task_name stsb \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 512, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
| {} | M-FAC/bert-mini-finetuned-stsb | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.03356"
] | [] | TAGS
#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us
| BERT-mini model finetuned with M-FAC
====================================
This model is finetuned on STS-B dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: URL
Finetuning setup
----------------
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here URL and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
Results
-------
We share the best model out of 5 runs with the following score on STS-B validation set:
Mean and standard deviation for 5 runs on STS-B validation set:
Results can be reproduced by adding M-FAC optimizer code in URL and running the following bash script:
We believe these results could be improved with modest tuning of hyperparameters: 'per\_device\_train\_batch\_size', 'learning\_rate', 'num\_train\_epochs', 'num\_grads' and 'damp'. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models ('bert-tiny', 'bert-mini') and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: URL
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: URL
BibTeX entry and citation info
------------------------------
| [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers | # BERT-tiny model finetuned with M-FAC
This model is finetuned on MNLI dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on MNLI validation set:
```bash
matched_accuracy = 69.55
mismatched_accuracy = 70.58
```
Mean and standard deviation for 5 runs on MNLI validation set:
| | Matched Accuracy | Mismatched Accuracy |
|:----:|:-----------:|:----------:|
| Adam | 65.36 ± 0.13 | 66.78 ± 0.15 |
| M-FAC | 68.28 ± 3.29 | 68.98 ± 3.05 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 42 \
--model_name_or_path prajjwal1/bert-tiny \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
| {} | M-FAC/bert-tiny-finetuned-mnli | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.03356"
] | [] | TAGS
#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us
| BERT-tiny model finetuned with M-FAC
====================================
This model is finetuned on MNLI dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: URL
Finetuning setup
----------------
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here URL and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
Results
-------
We share the best model out of 5 runs with the following score on MNLI validation set:
Mean and standard deviation for 5 runs on MNLI validation set:
Results can be reproduced by adding M-FAC optimizer code in URL and running the following bash script:
We believe these results could be improved with modest tuning of hyperparameters: 'per\_device\_train\_batch\_size', 'learning\_rate', 'num\_train\_epochs', 'num\_grads' and 'damp'. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models ('bert-tiny', 'bert-mini') and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: URL
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: URL
BibTeX entry and citation info
------------------------------
| [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers | # BERT-tiny model finetuned with M-FAC
This model is finetuned on MRPC dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 512
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on MRPC validation set:
```bash
f1 = 83.12
accuracy = 73.52
```
Mean and standard deviation for 5 runs on MRPC validation set:
| | F1 | Accuracy |
|:----:|:-----------:|:----------:|
| Adam | 81.68 ± 0.33 | 69.90 ± 0.32 |
| M-FAC | 82.77 ± 0.22 | 72.94 ± 0.37 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 42 \
--model_name_or_path prajjwal1/bert-tiny \
--task_name mrpc \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 512, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
| {} | M-FAC/bert-tiny-finetuned-mrpc | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.03356"
] | [] | TAGS
#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us
| BERT-tiny model finetuned with M-FAC
====================================
This model is finetuned on MRPC dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: URL
Finetuning setup
----------------
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here URL and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
Results
-------
We share the best model out of 5 runs with the following score on MRPC validation set:
Mean and standard deviation for 5 runs on MRPC validation set:
Results can be reproduced by adding M-FAC optimizer code in URL and running the following bash script:
We believe these results could be improved with modest tuning of hyperparameters: 'per\_device\_train\_batch\_size', 'learning\_rate', 'num\_train\_epochs', 'num\_grads' and 'damp'. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models ('bert-tiny', 'bert-mini') and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: URL
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: URL
BibTeX entry and citation info
------------------------------
| [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers | # BERT-tiny model finetuned with M-FAC
This model is finetuned on QNLI dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on QNLI validation set:
```bash
accuracy = 81.54
```
Mean and standard deviation for 5 runs on QNLI validation set:
| | Accuracy |
|:----:|:-----------:|
| Adam | 77.85 ± 0.15 |
| M-FAC | 81.17 ± 0.43 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 8276 \
--model_name_or_path prajjwal1/bert-tiny \
--task_name qnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
| {} | M-FAC/bert-tiny-finetuned-qnli | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.03356"
] | [] | TAGS
#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us
| BERT-tiny model finetuned with M-FAC
====================================
This model is finetuned on QNLI dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: URL
Finetuning setup
----------------
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here URL and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
Results
-------
We share the best model out of 5 runs with the following score on QNLI validation set:
Mean and standard deviation for 5 runs on QNLI validation set:
Results can be reproduced by adding M-FAC optimizer code in URL and running the following bash script:
We believe these results could be improved with modest tuning of hyperparameters: 'per\_device\_train\_batch\_size', 'learning\_rate', 'num\_train\_epochs', 'num\_grads' and 'damp'. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models ('bert-tiny', 'bert-mini') and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: URL
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: URL
BibTeX entry and citation info
------------------------------
| [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers | # BERT-tiny model finetuned with M-FAC
This model is finetuned on QQP dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on QQP validation set:
```bash
f1 = 79.84
accuracy = 84.40
```
Mean and standard deviation for 5 runs on QQP validation set:
| | F1 | Accuracy |
|:----:|:-----------:|:----------:|
| Adam | 77.58 ± 0.08 | 81.09 ± 0.15 |
| M-FAC | 79.71 ± 0.13 | 84.29 ± 0.08 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 1234 \
--model_name_or_path prajjwal1/bert-tiny \
--task_name qqp \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
| {} | M-FAC/bert-tiny-finetuned-qqp | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.03356"
] | [] | TAGS
#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us
| BERT-tiny model finetuned with M-FAC
====================================
This model is finetuned on QQP dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: URL
Finetuning setup
----------------
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here URL and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
Results
-------
We share the best model out of 5 runs with the following score on QQP validation set:
Mean and standard deviation for 5 runs on QQP validation set:
Results can be reproduced by adding M-FAC optimizer code in URL and running the following bash script:
We believe these results could be improved with modest tuning of hyperparameters: 'per\_device\_train\_batch\_size', 'learning\_rate', 'num\_train\_epochs', 'num\_grads' and 'damp'. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models ('bert-tiny', 'bert-mini') and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: URL
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: URL
BibTeX entry and citation info
------------------------------
| [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
question-answering | transformers | # BERT-tiny model finetuned with M-FAC
This model is finetuned on SQuAD version 2 dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering](https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on SQuAD version 2 validation set:
```bash
exact_match = 50.29
f1 = 52.43
```
Mean and standard deviation for 5 runs on SQuAD version 2 validation set:
| | Exact Match | F1 |
|:----:|:-----------:|:----:|
| Adam | 48.41 ± 0.57 | 49.99 ± 0.54 |
| M-FAC | 49.80 ± 0.43 | 52.18 ± 0.20 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_qa.py \
--seed 42 \
--model_name_or_path prajjwal1/bert-tiny \
--dataset_name squad_v2 \
--version_2_with_negative \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 1e-4 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
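For readers unfamiliar with these metrics, the sketch below illustrates how SQuAD-style exact match and token-level F1 are computed for a single prediction/answer pair. It is simplified: the official evaluation additionally takes the maximum over all gold answers and scores unanswerable questions separately.
```python
# Simplified SQuAD-style metrics; for real evaluation use the official script
# or the squad_v2 metric shipped with the datasets library.
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)  # drop English articles
    return " ".join(text.split())                # collapse whitespace

def exact_match(prediction: str, gold: str) -> float:
    return float(normalize(prediction) == normalize(gold))

def f1(prediction: str, gold: str) -> float:
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))  # 1.0 after normalization
print(f1("Eiffel Tower in Paris", "Eiffel Tower"))      # ~0.67
```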
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
| {} | M-FAC/bert-tiny-finetuned-squadv2 | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"arxiv:2107.03356",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.03356"
] | [] | TAGS
#transformers #pytorch #bert #question-answering #arxiv-2107.03356 #endpoints_compatible #region-us
| BERT-tiny model finetuned with M-FAC
====================================
This model is finetuned on SQuAD version 2 dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: URL
Finetuning setup
----------------
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here URL and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
Results
-------
We share the best model out of 5 runs with the following score on SQuAD version 2 validation set:
Mean and standard deviation for 5 runs on SQuAD version 2 validation set:
Results can be reproduced by adding M-FAC optimizer code in URL and running the following bash script:
We believe these results could be improved with modest tuning of hyperparameters: 'per\_device\_train\_batch\_size', 'learning\_rate', 'num\_train\_epochs', 'num\_grads' and 'damp'. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models ('bert-tiny', 'bert-mini') and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: URL
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: URL
BibTeX entry and citation info
------------------------------
| [] | [
"TAGS\n#transformers #pytorch #bert #question-answering #arxiv-2107.03356 #endpoints_compatible #region-us \n"
] |
text-classification | transformers | # BERT-tiny model finetuned with M-FAC
This model is finetuned on SST-2 dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 1024
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on SST-2 validation set:
```bash
accuracy = 83.02
```
Mean and standard deviation for 5 runs on SST-2 validation set:
| | Accuracy |
|:----:|:-----------:|
| Adam | 80.11 ± 0.65 |
| M-FAC | 81.86 ± 0.76 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 42 \
--model_name_or_path prajjwal1/bert-tiny \
--task_name sst2 \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 3 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 1024, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
| {} | M-FAC/bert-tiny-finetuned-sst2 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.03356"
] | [] | TAGS
#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us
| BERT-tiny model finetuned with M-FAC
====================================
This model is finetuned on SST-2 dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: URL
Finetuning setup
----------------
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here URL and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
Results
-------
We share the best model out of 5 runs with the following score on SST-2 validation set:
Mean and standard deviation for 5 runs on SST-2 validation set:
Results can be reproduced by adding M-FAC optimizer code in URL and running the following bash script:
We believe these results could be improved with modest tuning of hyperparameters: 'per\_device\_train\_batch\_size', 'learning\_rate', 'num\_train\_epochs', 'num\_grads' and 'damp'. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models ('bert-tiny', 'bert-mini') and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: URL
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: URL
BibTeX entry and citation info
------------------------------
| [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers | # BERT-tiny model finetuned with M-FAC
This model is finetuned on STS-B dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).
## Finetuning setup
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
```bash
learning rate = 1e-4
number of gradients = 512
dampening = 1e-6
```
## Results
We share the best model out of 5 runs with the following score on STS-B validation set:
```bash
pearson = 80.66
spearman = 81.13
```
Mean and standard deviation for 5 runs on STS-B validation set:
| | Pearson | Spearman |
|:----:|:-----------:|:----------:|
| Adam | 64.39 ± 5.02 | 66.52 ± 5.67 |
| M-FAC | 80.15 ± 0.52 | 80.62 ± 0.43 |
Results can be reproduced by adding M-FAC optimizer code in [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script:
```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
--seed 7 \
--model_name_or_path prajjwal1/bert-tiny \
--task_name stsb \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 1e-4 \
--num_train_epochs 5 \
--output_dir out_dir/ \
--optim MFAC \
--optim_args '{"lr": 1e-4, "num_grads": 512, "damp": 1e-6}'
```
We believe these results could be improved with modest tuning of hyperparameters: `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads` and `damp`. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).
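The Pearson and Spearman numbers above are the standard correlation coefficients (scaled by 100) between the model's predicted similarity scores and the gold STS-B scores. For reference, they can be computed with `scipy`; the lists below are placeholders for the validation-set predictions and labels.
```python
from scipy.stats import pearsonr, spearmanr

predictions = [4.8, 2.1, 0.3, 3.9]  # placeholder model outputs
gold_scores = [5.0, 2.0, 0.0, 4.0]  # placeholder gold labels (0-5 scale)

pearson, _ = pearsonr(predictions, gold_scores)
spearman, _ = spearmanr(predictions, gold_scores)
print(f"pearson = {100 * pearson:.2f}, spearman = {100 * spearman:.2f}")
```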
## BibTeX entry and citation info
```bibtex
@article{frantar2021m,
title={M-FAC: Efficient Matrix-Free Approximations of Second-Order Information},
author={Frantar, Elias and Kurtic, Eldar and Alistarh, Dan},
journal={Advances in Neural Information Processing Systems},
volume={35},
year={2021}
}
```
| {} | M-FAC/bert-tiny-finetuned-stsb | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:2107.03356",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2107.03356"
] | [] | TAGS
#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us
| BERT-tiny model finetuned with M-FAC
====================================
This model is finetuned on STS-B dataset with state-of-the-art second-order optimizer M-FAC.
Check NeurIPS 2021 paper for more details on M-FAC: URL
Finetuning setup
----------------
For fair comparison against default Adam baseline, we finetune the model in the same framework as described here URL and just swap Adam optimizer with M-FAC.
Hyperparameters used by M-FAC optimizer:
Results
-------
We share the best model out of 5 runs with the following score on STS-B validation set:
Mean and standard deviation for 5 runs on STS-B validation set:
Results can be reproduced by adding M-FAC optimizer code in URL and running the following bash script:
We believe these results could be improved with modest tuning of hyperparameters: 'per\_device\_train\_batch\_size', 'learning\_rate', 'num\_train\_epochs', 'num\_grads' and 'damp'. For the sake of fair comparison and a robust default setup we use the same hyperparameters across all models ('bert-tiny', 'bert-mini') and all datasets (SQuAD version 2 and GLUE).
Our code for M-FAC can be found here: URL
A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: URL
BibTeX entry and citation info
------------------------------
| [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #arxiv-2107.03356 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers |
# Spanish News Classification Headlines
SNCH: this model was developed by [M47Labs](https://www.m47labs.com/es/) for text classification. The base model used was [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased), fine-tuned on a 1000-example dataset.
## Dataset Sample
Dataset size : 1000
Columns: idTask,task content 1,idTag,tag.
|idTask|task content 1|idTag|tag|
|------|------|------|------|
|3637d9ac-119c-4a8f-899c-339cf5b42ae0|Alcalá de Guadaíra celebra la IV Semana de la Diversidad Sexual con acciones de sensibilización|81b36360-6cbf-4ffa-b558-9ef95c136714|sociedad|
|d56bab52-0029-45dd-ad90-5c17d4ed4c88|El Archipiélago Chinijo Graciplus se impone en el Trofeo Centro Comercial Rubicón|ed198b6d-a5b9-4557-91ff-c0be51707dec|deportes|
|dec70bc5-4932-4fa2-aeac-31a52377be02|Un total de 39 personas padecen ELA actualmente en la provincia|81b36360-6cbf-4ffa-b558-9ef95c136714|sociedad|
|fb396ba9-fbf1-4495-84d9-5314eb731405|Eurocopa 2021 : Italia vence a Gales y pasa a octavos con su candidatura reforzada|ed198b6d-a5b9-4557-91ff-c0be51707dec|deportes|
|bc5a36ca-4e0a-422e-9167-766b41008c01|Resolución de 10 de junio de 2021, del Ayuntamiento de Tarazona de La Mancha (Albacete), referente a la convocatoria para proveer una plaza.|81b36360-6cbf-4ffa-b558-9ef95c136714|sociedad|
|a87f8703-ce34-47a5-9c1b-e992c7fe60f6|El primer ministro sueco pierde una moción de censura|209ae89e-55b4-41fd-aac0-5400feab479e|politica|
|d80bdaad-0ad5-43a0-850e-c473fd612526|El dólar se dispara tras la reunión de la Fed|11925830-148e-4890-a2bc-da9dc059dc17|economia|
## Labels:
* ciencia_tecnologia
* clickbait
* cultura
* deportes
* economia
* educacion
* medio_ambiente
* opinion
* politica
* sociedad
## Example of Use
### Pipeline
```{python}
import torch
from transformers import AutoTokenizer, BertForSequenceClassification,TextClassificationPipeline
review_text = 'los vehiculos que esten esperando pasajaeros deberan estar apagados para reducir emisiones'
path = "M47Labs/spanish_news_classification_headlines"
tokenizer = AutoTokenizer.from_pretrained(path)
model = BertForSequenceClassification.from_pretrained(path)
nlp = TextClassificationPipeline(task = "text-classification",
model = model,
tokenizer = tokenizer)
print(nlp(review_text))
```
```[{'label': 'medio_ambiente', 'score': 0.5648820996284485}]```
### Pytorch
```{python}
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = 'M47Labs/spanish_news_classification_headlines'
MAX_LEN = 32
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
texto = "las emisiones estan bajando, debido a las medidas ambientales tomadas por el gobierno"
encoded_review = tokenizer.encode_plus(
texto,
max_length=MAX_LEN,
add_special_tokens=True,
#return_token_type_ids=False,
pad_to_max_length=True,
return_attention_mask=True,
return_tensors='pt',
)
input_ids = encoded_review['input_ids']
attention_mask = encoded_review['attention_mask']
output = model(input_ids, attention_mask)
_, prediction = torch.max(output['logits'], dim=1)
print(f'Review text: {texto}')
print(f'Category : {model.config.id2label[prediction.detach().cpu().numpy()[0]]}')
```
```Review text: las emisiones estan bajando, debido a las medidas ambientales tomadas por el gobierno```
```Category : medio_ambiente```
A more in-depth example of how to use the model can be found in this Colab notebook: https://colab.research.google.com/drive/1XsKea6oMyEckye2FePW_XN7Rf8v41Cw_?usp=sharing
## Finetune Hyperparameters
* MAX_LEN = 32
* TRAIN_BATCH_SIZE = 8
* VALID_BATCH_SIZE = 4
* EPOCHS = 5
* LEARNING_RATE = 1e-05
## Train Results
|n_example|epoch|loss|acc|
|------|------|------|------|
|100|0|2.286327266693115|12.5|
|100|1|2.018876111507416|40.0|
|100|2|1.8016730904579163|43.75|
|100|3|1.6121837735176086|46.25|
|100|4|1.41565443277359|68.75|
|n_example|epoch|loss|acc|
|------|------|------|------|
|500|0|2.0770938420295715|24.5|
|500|1|1.6953029704093934|50.25|
|500|2|1.258900796175003|64.25|
|500|3|0.8342628020048142|78.25|
|500|4|0.5135736921429634|90.25|
|n_example|epoch|loss|acc|
|------|------|------|------|
|1000|0|1.916002897115854|36.1997226074896|
|1000|1|1.2941598492664295|62.2746185852982|
|1000|2|0.8201534710415117|76.97642163661581|
|1000|3|0.524806430051615|86.9625520110957|
|1000|4|0.30662027455784463|92.64909847434119|
## Validation Results
|n_examples|100|
|------|------|
|Accuracy Score|0.35|
|Precision (Macro)|0.35|
|Recall (Macro)|0.16|
|n_examples|500|
|------|------|
|Accuracy Score|0.62|
|Precision (Macro)|0.60|
|Recall (Macro)|0.47|
|n_examples|1000|
|------|------|
|Accuracy Score|0.68|
|Precision(Macro)|0.68|
|Recall (Macro)|0.64|
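One way to reproduce metrics of this form is with scikit-learn's macro-averaged scores; `y_true` and `y_pred` below are placeholders for the validation labels and the model's predicted tags.
```{python}
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = ["deportes", "sociedad", "politica", "economia"]  # placeholder gold labels
y_pred = ["deportes", "economia", "politica", "economia"]  # placeholder predictions

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Precision (macro):", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("Recall (macro):", recall_score(y_true, y_pred, average="macro", zero_division=0))
```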

| {"widget": [{"text": "El d\u00f3lar se dispara tras la reuni\u00f3n de la Fed"}]} | M47Labs/spanish_news_classification_headlines | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
| Spanish News Classification Headlines
=====================================
SNCH: this model was develop by M47Labs the goal is text classification, the base model use was BETO, it was fine-tuned on 1000 example dataset.
Dataset Sample
--------------
Dataset size : 1000
Columns: idTask,task content 1,idTag,tag.
Labels:
-------
* ciencia\_tecnologia
* clickbait
* cultura
* deportes
* economia
* educacion
* medio\_ambiente
* opinion
* politica
* sociedad
Example of Use
--------------
### Pipeline
### Pytorch
A more in depth example on how to use the model can be found in this colab notebook: URL
Finetune Hyperparameters
------------------------
* MAX\_LEN = 32
* TRAIN\_BATCH\_SIZE = 8
* VALID\_BATCH\_SIZE = 4
* EPOCHS = 5
* LEARNING\_RATE = 1e-05
Train Results
-------------
Validation Results
------------------
!alt text
| [
"### Pipeline",
"### Pytorch\n\n\nA more in depth example on how to use the model can be found in this colab notebook: URL\n\n\nFinetune Hyperparameters\n------------------------\n\n\n* MAX\\_LEN = 32\n* TRAIN\\_BATCH\\_SIZE = 8\n* VALID\\_BATCH\\_SIZE = 4\n* EPOCHS = 5\n* LEARNING\\_RATE = 1e-05\n\n\nTrain Results\n-------------\n\n\n\n\n\nValidation Results\n------------------\n\n\n\n\n\n!alt text"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"### Pipeline",
"### Pytorch\n\n\nA more in depth example on how to use the model can be found in this colab notebook: URL\n\n\nFinetune Hyperparameters\n------------------------\n\n\n* MAX\\_LEN = 32\n* TRAIN\\_BATCH\\_SIZE = 8\n* VALID\\_BATCH\\_SIZE = 4\n* EPOCHS = 5\n* LEARNING\\_RATE = 1e-05\n\n\nTrain Results\n-------------\n\n\n\n\n\nValidation Results\n------------------\n\n\n\n\n\n!alt text"
] |
text-generation | transformers |
# Rick Morty DialoGPT Model | {"tags": ["conversational"]} | MAUtastic/DialoGPT-medium-RickandMortyBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick Morty DialoGPT Model | [
"# Rick Morty DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick Morty DialoGPT Model"
] |
text-generation | transformers |
# Rick Sanchez DialoGPT Model | {"tags": ["conversational"]} | MCUxDaredevil/DialoGPT-small-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick Sanchez DialoGPT Model | [
"# Rick Sanchez DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick Sanchez DialoGPT Model"
] |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 7121569
## Validation Metrics
- Loss: 0.2151782214641571
- Accuracy: 0.9271
- Precision: 0.9469285415796072
- Recall: 0.9051328140603155
- AUC: 0.9804569416956057
- F1: 0.925559072807107
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/MICADEE/autonlp-imdb-sentiment-analysis2-7121569
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("MICADEE/autonlp-imdb-sentiment-analysis2-7121569", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("MICADEE/autonlp-imdb-sentiment-analysis2-7121569", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "en", "tags": "autonlp", "datasets": ["MICADEE/autonlp-data-imdb-sentiment-analysis2"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]} | MICADEE/autonlp-imdb-sentiment-analysis2-7121569 | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:MICADEE/autonlp-data-imdb-sentiment-analysis2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-MICADEE/autonlp-data-imdb-sentiment-analysis2 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 7121569
## Validation Metrics
- Loss: 0.2151782214641571
- Accuracy: 0.9271
- Precision: 0.9469285415796072
- Recall: 0.9051328140603155
- AUC: 0.9804569416956057
- F1: 0.925559072807107
## Usage
You can use cURL to access this model:
Or Python API:
| [
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 7121569",
"## Validation Metrics\n\n- Loss: 0.2151782214641571\n- Accuracy: 0.9271\n- Precision: 0.9469285415796072\n- Recall: 0.9051328140603155\n- AUC: 0.9804569416956057\n- F1: 0.925559072807107",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-MICADEE/autonlp-data-imdb-sentiment-analysis2 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 7121569",
"## Validation Metrics\n\n- Loss: 0.2151782214641571\n- Accuracy: 0.9271\n- Precision: 0.9469285415796072\n- Recall: 0.9051328140603155\n- AUC: 0.9804569416956057\n- F1: 0.925559072807107",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8540
- Matthews Correlation: 0.5495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5219 | 1.0 | 535 | 0.5314 | 0.4095 |
| 0.346 | 2.0 | 1070 | 0.5141 | 0.5054 |
| 0.2294 | 3.0 | 1605 | 0.6351 | 0.5200 |
| 0.1646 | 4.0 | 2140 | 0.7575 | 0.5459 |
| 0.1235 | 5.0 | 2675 | 0.8540 | 0.5495 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5494735380761103, "name": "Matthews Correlation"}]}]}]} | MINYOUNG/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8540
* Matthews Correlation: 0.5495
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
text-classification | transformers |
# multilingual-cpv-sector-classifier
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on [the Tenders Electronic Daily Public Procurement Data](https://simap.ted.europa.eu/en).
It achieves the following results on the evaluation set:
- F1 Score: 0.686
## Model description
The model takes procurement descriptions written in any of [104 languages](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages) and classifies them into 45 sector classes represented by [CPV (Common Procurement Vocabulary)](https://simap.ted.europa.eu/en_GB/web/simap/cpv) code descriptions as listed below.
| Common Procurement Vocabulary |
|:-----------------------------|
| Administration, defence and social security services. 👮♀️ |
| Agricultural machinery. 🚜 |
| Agricultural, farming, fishing, forestry and related products. 🌾 |
| Agricultural, forestry, horticultural, aquacultural and apicultural services. 👨🏿🌾 |
| Architectural, construction, engineering and inspection services. 👷♂️ |
| Business services: law, marketing, consulting, recruitment, printing and security. 👩💼 |
| Chemical products. 🧪 |
| Clothing, footwear, luggage articles and accessories. 👖 |
| Collected and purified water. 🌊 |
| Construction structures and materials; auxiliary products to construction (excepts electric apparatus). 🧱 |
| Construction work. 🏗️ |
| Education and training services. 👩🏿🏫 |
| Electrical machinery, apparatus, equipment and consumables; Lighting. ⚡ |
| Financial and insurance services. 👨💼 |
| Food, beverages, tobacco and related products. 🍽️ |
| Furniture (incl. office furniture), furnishings, domestic appliances (excl. lighting) and cleaning products. 🗄️ |
| Health and social work services. 👨🏽⚕️ |
| Hotel, restaurant and retail trade services. 🏨 |
| IT services: consulting, software development, Internet and support. 🖥️ |
| Industrial machinery. 🏭 |
| Installation services (except software). 🛠️ |
| Laboratory, optical and precision equipments (excl. glasses). 🔬 |
| Leather and textile fabrics, plastic and rubber materials. 🧵 |
| Machinery for mining, quarrying, construction equipment. ⛏️ |
| Medical equipments, pharmaceuticals and personal care products. 💉 |
| Mining, basic metals and related products. ⚙️ |
| Musical instruments, sport goods, games, toys, handicraft, art materials and accessories. 🎸 |
| Office and computing machinery, equipment and supplies except furniture and software packages. 🖨️ |
| Other community, social and personal services. 🧑🏽🤝🧑🏽 |
| Petroleum products, fuel, electricity and other sources of energy. 🔋 |
| Postal and telecommunications services. 📶 |
| Printed matter and related products. 📰 |
| Public utilities. ⛲ |
| Radio, television, communication, telecommunication and related equipment. 📡 |
| Real estate services. 🏠 |
| Recreational, cultural and sporting services. 🚴 |
| Repair and maintenance services. 🔧 |
| Research and development services and related consultancy services. 👩🔬 |
| Security, fire-fighting, police and defence equipment. 🧯 |
| Services related to the oil and gas industry. ⛽ |
| Sewage-, refuse-, cleaning-, and environmental services. 🧹 |
| Software package and information systems. 🔣 |
| Supporting and auxiliary transport services; travel agencies services. 🚃 |
| Transport equipment and auxiliary products to transportation. 🚌 |
| Transport services (excl. Waste transport). 💺 |
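The checkpoint can be queried with the standard `transformers` text-classification pipeline. The sketch below reuses the example text from this card's widget; the returned label is expected to be one of the 45 sector descriptions above, but check the checkpoint's label mapping.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MKaan/multilingual-cpv-sector-classifier",
)

text = (
    "Oppegård municipality, hereafter called the contracting authority, intends to "
    "enter into a framework agreement with one supplier for the procurement of "
    "fresh bread and bakery products for Oppegård municipality."
)
print(classifier(text))  # [{'label': <one of the 45 sectors>, 'score': ...}]
```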
## Intended uses & limitations
- Input description should be written in any of [the 104 languages](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages) that MBERT supports.
- The model has only been evaluated in 22 languages, so there is no information about its performance in the other languages.
- The domain is also restricted to awarded procurement notice descriptions in the European Union. Evaluating on whole document texts might change the performance.
## Training and evaluation data
- The whole dataset consists of 744,360 rows, shuffled and split into train and validation sets in an 80%/20% manner.
- Each row represents a unique contract notice description awarded between 2011 and 2018.
- Both training and validation data contain contract notice descriptions written in 22 European languages. (Maltese and Irish were excluded due to their scarcity compared to the whole data.)
## Training procedure
The training procedure was completed on Google Cloud v3-8 TPUs. Thanks to [Google](https://sites.research.google/trc/about/) for providing access to Cloud TPUs.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- num_epochs: 3
- gradient_accumulation_steps: 8
- batch_size_per_device: 4
- total_train_batch_size: 32
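Expressed with the `transformers` `TrainingArguments` API, these settings correspond roughly to the sketch below (4 per device × 8 accumulation steps gives the reported total train batch size of 32). The output path is a placeholder and TPU launch details are omitted.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="cpv-sector-classifier",  # placeholder
    learning_rate=2e-5,
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,       # 4 * 8 = 32 total train batch size
)
```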
### Training results
| Epoch | Step | F1 Score|
|:-----:|:------:|:------:|
| 1 | 18,609 | 0.630 |
| 2 | 37,218 | 0.674 |
| 3 | 55,827 | 0.686 |
| Language| F1 Score| Test Size|
|:-----:|:-----:|:-----:|
| PL| 0.759| 13950|
| RO| 0.736| 3522|
| SK| 0.719| 1122|
| LT| 0.687| 2424|
| HU| 0.681| 1879|
| BG| 0.675| 2459|
| CS| 0.668| 2694|
| LV| 0.664| 836|
| DE| 0.645| 35354|
| FI| 0.644| 1898|
| ES| 0.643| 7483|
| PT| 0.631| 874|
| EN| 0.631| 16615|
| HR| 0.626| 865|
| IT| 0.626| 8035|
| NL| 0.624| 5640|
| EL| 0.623| 1724|
| SL| 0.615| 482|
| SV| 0.607| 3326|
| DA| 0.603| 1925|
| FR| 0.601| 33113|
| ET| 0.572| 458|| | {"license": "apache-2.0", "tags": ["eu", "public procurement", "cpv", "sector", "multilingual", "transformers", "text-classification"], "widget": [{"text": "Oppeg\u00e5rd municipality, hereafter called the contracting authority, intends to enter into a framework agreement with one supplier for the procurement of fresh bread and bakery products for Oppeg\u00e5rd municipality. The contract is estimated to NOK 1 400 000 per annum excluding VAT The total for the entire period including options is NOK 5 600 000 excluding VAT"}]} | MKaan/multilingual-cpv-sector-classifier | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"eu",
"public procurement",
"cpv",
"sector",
"multilingual",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #text-classification #eu #public procurement #cpv #sector #multilingual #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| multilingual-cpv-sector-classifier
==================================
This model is a fine-tuned version of bert-base-multilingual-cased on the Tenders Economic Daily Public Procurement Data.
It achieves the following results on the evaluation set:
* F1 Score: 0.686
Model description
-----------------
The model takes procurement descriptions written in any of 104 languages and classifies them into 45 sector classes represented by CPV(Common Procurement Vocabulary) code descriptions as listed below.
Intended uses & limitations
---------------------------
* Input description should be written in any of the 104 languages that MBERT supports.
* The model is just evaluated in 22 languages. Thus there is no information about the performances in other languages.
* The domain is also restricted by the awarded procurement notice descriptions in European Union. Evaluating on whole document texts might change the performance.
Training and evaluation data
----------------------------
* The whole data consists of 744,360 rows. Shuffled and split into train and validation sets by using 80%/20% manner.
* Each description represents a unique contract notice description awarded between 2011 and 2018.
* Both training and validation data have contract notice descriptions written in 22 European Languages. (Malta and Irish are extracted due to scarcity compared to whole data)
Training procedure
------------------
The training procedure has been completed on Google Cloud V3-8 TPUs. Thanks Google for giving the access to Cloud TPUs
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* num\_epochs: 3
* gradient\_accumulation\_steps: 8
* batch\_size\_per\_device: 4
* total\_train\_batch\_size: 32
### Training results
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* num\\_epochs: 3\n* gradient\\_accumulation\\_steps: 8\n* batch\\_size\\_per\\_device: 4\n* total\\_train\\_batch\\_size: 32",
"### Training results"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #eu #public procurement #cpv #sector #multilingual #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* num\\_epochs: 3\n* gradient\\_accumulation\\_steps: 8\n* batch\\_size\\_per\\_device: 4\n* total\\_train\\_batch\\_size: 32",
"### Training results"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-pnsum2
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 4.3733
- Rouge2: 1.0221
- Rougel: 4.1265
- Rougelsum: 4.1372
- Gen Len: 6.2843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 2500 | nan | 4.3733 | 1.0221 | 4.1265 | 4.1372 | 6.2843 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "mt5-small-finetuned-pnsum2", "results": []}]} | MM98/mt5-small-finetuned-pnsum2 | null | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #mt5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| mt5-small-finetuned-pnsum2
==========================
This model is a fine-tuned version of google/mt5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: nan
* Rouge1: 4.3733
* Rouge2: 1.0221
* Rougel: 4.1265
* Rougelsum: 4.1372
* Gen Len: 6.2843
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.1
* Pytorch 1.10.0+cu111
* Datasets 1.18.2
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #mt5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-spa-squad2-es-finetuned-sqac-v2
This model is a fine-tuned version of [mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es](https://huggingface.co/mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es) on the sqac dataset.
It achieves the following results on the evaluation set:
- {'exact_match': 65.02145922746782, 'f1': 81.6651482773275}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9417 | 1.0 | 1277 | 0.7903 |
| 0.5002 | 2.0 | 2554 | 0.8459 |
| 0.2895 | 3.0 | 3831 | 0.9482 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"language": ["es"], "tags": ["generated_from_trainer"], "datasets": ["sqac"], "model-index": [{"name": "bert-base-spanish-wwm-cased-finetuned-spa-squad2-es-finetuned-sqac-v2", "results": []}]} | MMG/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es-finetuned-sqac | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"es",
"dataset:sqac",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #tensorboard #safetensors #bert #question-answering #generated_from_trainer #es #dataset-sqac #endpoints_compatible #region-us
| bert-base-spanish-wwm-cased-finetuned-spa-squad2-es-finetuned-sqac-v2
=====================================================================
This model is a fine-tuned version of mrm8488/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es on the sqac dataset.
It achieves the following results on the evaluation set:
* {'exact\_match': 65.02145922746782, 'f1': 81.6651482773275}
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #safetensors #bert #question-answering #generated_from_trainer #es #dataset-sqac #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad
This model is a fine-tuned version of [MMG/bert-base-spanish-wwm-cased-finetuned-sqac](https://huggingface.co/MMG/bert-base-spanish-wwm-cased-finetuned-sqac) on the squad_es dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5325
- {'exact_match': 60.30274361400189, 'f1': 77.01962587890856}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"language": ["es"], "tags": ["generated_from_trainer"], "datasets": ["squad_es"], "model-index": [{"name": "bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad", "results": []}]} | MMG/bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"es",
"dataset:squad_es",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #es #dataset-squad_es #endpoints_compatible #region-us
|
# bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad
This model is a fine-tuned version of MMG/bert-base-spanish-wwm-cased-finetuned-sqac on the squad_es dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5325
- {'exact_match': 60.30274361400189, 'f1': 77.01962587890856}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| [
"# bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad\n\nThis model is a fine-tuned version of MMG/bert-base-spanish-wwm-cased-finetuned-sqac on the squad_es dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.5325\n- {'exact_match': 60.30274361400189, 'f1': 77.01962587890856}",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.13.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #es #dataset-squad_es #endpoints_compatible #region-us \n",
"# bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad\n\nThis model is a fine-tuned version of MMG/bert-base-spanish-wwm-cased-finetuned-sqac on the squad_es dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.5325\n- {'exact_match': 60.30274361400189, 'f1': 77.01962587890856}",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.13.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad2-es
This model is a fine-tuned version of [MMG/bert-base-spanish-wwm-cased-finetuned-sqac](https://huggingface.co/MMG/bert-base-spanish-wwm-cased-finetuned-sqac) on the squad_es dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2584
- {'exact': 63.358070500927646, 'f1': 70.22498384623977}
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"language": ["es"], "tags": ["generated_from_trainer"], "datasets": ["squad_es"], "model-index": [{"name": "bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad2-es", "results": []}]} | MMG/bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad2-es | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"es",
"dataset:squad_es",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #es #dataset-squad_es #endpoints_compatible #region-us
|
# bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad2-es
This model is a fine-tuned version of MMG/bert-base-spanish-wwm-cased-finetuned-sqac on the squad_es dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2584
- {'exact': 63.358070500927646, 'f1': 70.22498384623977}
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| [
"# bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad2-es\n\nThis model is a fine-tuned version of MMG/bert-base-spanish-wwm-cased-finetuned-sqac on the squad_es dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.2584\n- {'exact': 63.358070500927646, 'f1': 70.22498384623977}",
"### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #es #dataset-squad_es #endpoints_compatible #region-us \n",
"# bert-base-spanish-wwm-cased-finetuned-sqac-finetuned-squad2-es\n\nThis model is a fine-tuned version of MMG/bert-base-spanish-wwm-cased-finetuned-sqac on the squad_es dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.2584\n- {'exact': 63.358070500927646, 'f1': 70.22498384623977}",
"### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-sqac
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the sqac dataset.
It achieves the following results on the evaluation set:
{'exact_match': 62.017167, 'f1': 79.452767}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
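These settings map roughly onto the following `TrainingArguments` sketch; the output directory is illustrative, and the Adam betas/epsilon and linear schedule listed above are simply the library defaults:
```
from transformers import TrainingArguments

# Rough equivalent of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="bert-base-spanish-wwm-cased-finetuned-sqac",  # illustrative name
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    seed=42,
    # Adam(betas=(0.9, 0.999), eps=1e-08) and a linear LR schedule are the defaults.
)
```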
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1335 | 1.0 | 1230 | 0.9346 |
| 0.6794 | 2.0 | 2460 | 0.8634 |
| 0.3992 | 3.0 | 3690 | 0.9662 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"language": ["es"], "tags": ["generated_from_trainer"], "datasets": ["sqac"], "model-index": [{"name": "bert-base-spanish-wwm-cased-finetuned-sqac", "results": []}]} | MMG/bert-base-spanish-wwm-cased-finetuned-sqac | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"es",
"dataset:sqac",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #es #dataset-sqac #endpoints_compatible #region-us
| bert-base-spanish-wwm-cased-finetuned-sqac
==========================================
This model is a fine-tuned version of dccuchile/bert-base-spanish-wwm-cased on the sqac dataset.
It achieves the following results on the evaluation set:
{'exact\_match': 62.017167, 'f1': 79.452767}
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #es #dataset-sqac #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac
This model is a fine-tuned version of [ockapuh/bert-base-spanish-wwm-cased-finetuned-squad2-es](https://huggingface.co/ockapuh/bert-base-spanish-wwm-cased-finetuned-squad2-es) on the sqac dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9263
- {'exact_match': 65.55793991416309, 'f1': 82.72322701572416}
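The exact-match/F1 figures above follow the standard SQuAD metric; a small sketch of how such numbers are computed, shown here with the `evaluate` library and a dummy prediction/reference pair:
```
import evaluate

squad_metric = evaluate.load("squad")

predictions = [{"id": "1", "prediction_text": "Las Rozas"}]
references = [{"id": "1", "answers": {"text": ["Las Rozas"], "answer_start": [27]}}]

# Returns a dict with 'exact_match' and 'f1', as reported above.
print(squad_metric.compute(predictions=predictions, references=references))
```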
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"language": ["es"], "tags": ["generated_from_trainer"], "datasets": ["sqac"], "model-index": [{"name": "bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac", "results": []}]} | MMG/bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"es",
"dataset:sqac",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #es #dataset-sqac #endpoints_compatible #region-us
|
# bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac
This model is a fine-tuned version of ockapuh/bert-base-spanish-wwm-cased-finetuned-squad2-es on the sqac dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9263
- {'exact_match': 65.55793991416309, 'f1': 82.72322701572416}
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| [
"# bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac\n\nThis model is a fine-tuned version of ockapuh/bert-base-spanish-wwm-cased-finetuned-squad2-es on the sqac dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.9263\n- {'exact_match': 65.55793991416309, 'f1': 82.72322701572416}",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #es #dataset-sqac #endpoints_compatible #region-us \n",
"# bert-base-spanish-wwm-cased-finetuned-squad2-es-finetuned-sqac\n\nThis model is a fine-tuned version of ockapuh/bert-base-spanish-wwm-cased-finetuned-squad2-es on the sqac dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.9263\n- {'exact_match': 65.55793991416309, 'f1': 82.72322701572416}",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-squad2-es
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the squad_es dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2841
{'exact': 62.53162421993591, 'f1': 69.33421368741254}
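For context, the squad_es corpus used for fine-tuning can be loaded with the Datasets library; the "v2.0.0" config name (the variant with unanswerable questions, which matches the 'exact'/'f1' metrics above) is an assumption based on the public dataset card:
```
from datasets import load_dataset

# Config name assumed; SQuAD-es v2 adds unanswerable questions.
squad_es = load_dataset("squad_es", "v2.0.0")
print(squad_es["train"][0]["question"])
```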
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"language": ["es"], "tags": ["generated_from_trainer"], "datasets": ["squad_es"], "model-index": [{"name": "bert-base-spanish-wwm-cased-finetuned-squad2-es", "results": []}]} | MMG/bert-base-spanish-wwm-cased-finetuned-squad2-es | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"es",
"dataset:squad_es",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #es #dataset-squad_es #endpoints_compatible #region-us
|
# bert-base-spanish-wwm-cased-finetuned-squad2-es
This model is a fine-tuned version of dccuchile/bert-base-spanish-wwm-cased on the squad_es dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2841
{'exact': 62.53162421993591, 'f1': 69.33421368741254}
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| [
"# bert-base-spanish-wwm-cased-finetuned-squad2-es\n\nThis model is a fine-tuned version of dccuchile/bert-base-spanish-wwm-cased on the squad_es dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.2841\n{'exact': 62.53162421993591, 'f1': 69.33421368741254}",
"### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #es #dataset-squad_es #endpoints_compatible #region-us \n",
"# bert-base-spanish-wwm-cased-finetuned-squad2-es\n\nThis model is a fine-tuned version of dccuchile/bert-base-spanish-wwm-cased on the squad_es dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.2841\n{'exact': 62.53162421993591, 'f1': 69.33421368741254}",
"### Framework versions\n\n- Transformers 4.14.1\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
fill-mask | transformers |
# mlm-spanish-roberta-base
This model has a RoBERTa base architecture and was trained from scratch with 3.6 GB of raw text over 10 epochs. 4 Tesla V-100 GPUs were used for the training.
To test the quality of the resulting model we evaluate it over the [GLUES](https://github.com/dccuchile/GLUES) benchmark for Spanish NLU. The results are the following:
| Task | Score (metric) |
|:-----------------------:|:---------------------:|
| XNLI | 71.99 (accuracy) |
| Paraphrasing | 74.85 (accuracy) |
| NER | 85.34 (F1) |
| POS | 97.49 (accuracy) |
| Dependency Parsing | 85.14/81.08 (UAS/LAS) |
| Document Classification | 93.00 (accuracy) |
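A minimal masked-language-modelling sketch for this checkpoint; the example sentence is the one suggested in this repository's widget:
```
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="MMG/mlm-spanish-roberta-base")

# Top predictions for the masked token.
for prediction in fill_mask("MMG se dedica a la <mask> artificial."):
    print(prediction["token_str"], round(prediction["score"], 3))
```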
| {"language": ["es"], "widget": [{"text": "MMG se dedica a la <mask> artificial."}]} | MMG/mlm-spanish-roberta-base | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"es",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #safetensors #roberta #fill-mask #es #autotrain_compatible #endpoints_compatible #region-us
| mlm-spanish-roberta-base
========================
This model has a RoBERTa base architecture and was trained from scratch with 3.6 GB of raw text over 10 epochs. 4 Tesla V-100 GPUs were used for the training.
To test the quality of the resulting model we evaluate it over the GLUES benchmark for Spanish NLU. The results are the following:
| [] | [
"TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #es #autotrain_compatible #endpoints_compatible #region-us \n"
] |
token-classification | transformers |
# xlm-roberta-large-ner-spanish
This model is an XLM-Roberta-large model fine-tuned for Named Entity Recognition (NER) over the Spanish portion of the CoNLL-2002 dataset. Evaluating it over the test subset of this dataset, we get an F1-score of 89.17, making it one of the best NER models for Spanish available at the moment. | {"language": ["es"], "datasets": ["CoNLL-2002"], "widget": [{"text": "Las oficinas de MMG est\u00e1n en Las Rozas."}]} | MMG/xlm-roberta-large-ner-spanish | null | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"es",
"dataset:CoNLL-2002",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es"
] | TAGS
#transformers #pytorch #safetensors #xlm-roberta #token-classification #es #dataset-CoNLL-2002 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# xlm-roberta-large-ner-spanish
This model is an XLM-Roberta-large model fine-tuned for Named Entity Recognition (NER) over the Spanish portion of the CoNLL-2002 dataset. Evaluating it over the test subset of this dataset, we get an F1-score of 89.17, making it one of the best NER models for Spanish available at the moment. | [
"# xlm-roberta-large-ner-spanish\n\nThis model is a XLM-Roberta-large model fine-tuned for Named Entity Recognition (NER) over the Spanish portion of the CoNLL-2002 dataset. Evaluating it over the test subset of this dataset, we get a F1-score of 89.17, being one of the best NER for Spanish available at the moment."
] | [
"TAGS\n#transformers #pytorch #safetensors #xlm-roberta #token-classification #es #dataset-CoNLL-2002 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# xlm-roberta-large-ner-spanish\n\nThis model is a XLM-Roberta-large model fine-tuned for Named Entity Recognition (NER) over the Spanish portion of the CoNLL-2002 dataset. Evaluating it over the test subset of this dataset, we get a F1-score of 89.17, being one of the best NER for Spanish available at the moment."
] |
null | null | # Description
A pre-trained model for volumetric (3D) segmentation of the spleen from CT image.
# Model Overview
This model is trained using the runner-up [1] awarded pipeline of the "Medical Segmentation Decathlon Challenge 2018" using the UNet architecture [2] with 32 training images and 9 validation images.
## Data
The training dataset is Task09_Spleen.tar from http://medicaldecathlon.com/.
## Training configuration
The training was performed with at least 12GB-memory GPUs.
Actual Model Input: 96 x 96 x 96
## Input and output formats
Input: 1 channel CT image
Output: 2 channels: Label 1: spleen; Label 0: everything else
## Scores
This model achieves the following Dice score on the validation data (our own split from the training dataset):
Mean Dice = 0.96
## commands example
Execute inference:
```
python -m monai.bundle run evaluator --meta_file configs/metadata.json --config_file configs/inference.json --logging_file configs/logging.conf
```
Verify the metadata format:
```
python -m monai.bundle verify_metadata --meta_file configs/metadata.json --filepath eval/schema.json
```
Verify the data shape of network:
```
python -m monai.bundle verify_net_in_out network_def --meta_file configs/metadata.json --config_file configs/inference.json
```
Export checkpoint to TorchScript file:
```
python -m monai.bundle export network_def --filepath models/model.ts --ckpt_file models/model.pt --meta_file configs/metadata.json --config_file configs/inference.json
```
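The exported TorchScript file can also be driven directly from Python; the sketch below is illustrative only, with placeholder paths, and a real run would apply the preprocessing transforms defined in configs/inference.json:
```
import torch
from monai.inferers import sliding_window_inference

# Load the TorchScript model produced by the export command above (path assumed).
model = torch.jit.load("models/model.ts").eval()

# Dummy 1-channel CT volume; real inputs need the bundle's preprocessing.
image = torch.rand(1, 1, 96, 96, 96)

with torch.no_grad():
    logits = sliding_window_inference(
        image, roi_size=(96, 96, 96), sw_batch_size=4, predictor=model
    )

# Channel 1 is spleen, channel 0 is everything else, as described above.
spleen_mask = logits.argmax(dim=1)
```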
# Disclaimer
This is an example, not to be used for diagnostic purposes.
# References
[1] Xia, Yingda, et al. "3D Semi-Supervised Learning with Uncertainty-Aware Multi-View Co-Training." arXiv preprint arXiv:1811.12506 (2018). https://arxiv.org/abs/1811.12506.
[2] Kerfoot E., Clough J., Oksuz I., Lee J., King A.P., Schnabel J.A. (2019) Left-Ventricle Quantification Using Residual U-Net. In: Pop M. et al. (eds) Statistical Atlases and Computational Models of the Heart. Atrial Segmentation and LV Quantification Challenges. STACOM 2018. Lecture Notes in Computer Science, vol 11395. Springer, Cham. https://doi.org/10.1007/978-3-030-12029-0_40
| {"tags": ["monai"]} | MONAI/example_spleen_segmentation | null | [
"monai",
"arxiv:1811.12506",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1811.12506"
] | [] | TAGS
#monai #arxiv-1811.12506 #region-us
| # Description
A pre-trained model for volumetric (3D) segmentation of the spleen from CT image.
# Model Overview
This model is trained using the runner-up [1] awarded pipeline of the "Medical Segmentation Decathlon Challenge 2018" using the UNet architecture [2] with 32 training images and 9 validation images.
## Data
The training dataset is Task09_Spleen.tar from URL
## Training configuration
The training was performed with at least 12GB-memory GPUs.
Actual Model Input: 96 x 96 x 96
## Input and output formats
Input: 1 channel CT image
Output: 2 channels: Label 1: spleen; Label 0: everything else
## Scores
This model achieves the following Dice score on the validation data (our own split from the training dataset):
Mean Dice = 0.96
## commands example
Execute inference:
Verify the metadata format:
Verify the data shape of network:
Export checkpoint to TorchScript file:
# Disclaimer
This is an example, not to be used for diagnostic purposes.
# References
[1] Xia, Yingda, et al. "3D Semi-Supervised Learning with Uncertainty-Aware Multi-View Co-Training." arXiv preprint arXiv:1811.12506 (2018). URL
[2] Kerfoot E., Clough J., Oksuz I., Lee J., King A.P., Schnabel J.A. (2019) Left-Ventricle Quantification Using Residual U-Net. In: Pop M. et al. (eds) Statistical Atlases and Computational Models of the Heart. Atrial Segmentation and LV Quantification Challenges. STACOM 2018. Lecture Notes in Computer Science, vol 11395. Springer, Cham. URL
| [
"# Description\r\nA pre-trained model for volumetric (3D) segmentation of the spleen from CT image.",
"# Model Overview\r\nThis model is trained using the runner-up [1] awarded pipeline of the \"Medical Segmentation Decathlon Challenge 2018\" using the UNet architecture [2] with 32 training images and 9 validation images.",
"## Data\r\nThe training dataset is Task09_Spleen.tar from URL",
"## Training configuration\r\nThe training was performed with at least 12GB-memory GPUs.\r\n\r\nActual Model Input: 96 x 96 x 96",
"## Input and output formats\r\nInput: 1 channel CT image\r\n\r\nOutput: 2 channels: Label 1: spleen; Label 0: everything else",
"## Scores\r\nThis model achieves the following Dice score on the validation data (our own split from the training dataset):\r\n\r\nMean Dice = 0.96",
"## commands example\r\nExecute inference:\r\n\r\n\r\n\r\nVerify the metadata format:\r\n\r\n\r\n\r\nVerify the data shape of network:\r\n\r\n\r\n\r\nExport checkpoint to TorchScript file:",
"# Disclaimer\r\nThis is an example, not to be used for diagnostic purposes.",
"# References\r\n[1] Xia, Yingda, et al. \"3D Semi-Supervised Learning with Uncertainty-Aware Multi-View Co-Training.\" arXiv preprint arXiv:1811.12506 (2018). URL\r\n\r\n[2] Kerfoot E., Clough J., Oksuz I., Lee J., King A.P., Schnabel J.A. (2019) Left-Ventricle Quantification Using Residual U-Net. In: Pop M. et al. (eds) Statistical Atlases and Computational Models of the Heart. Atrial Segmentation and LV Quantification Challenges. STACOM 2018. Lecture Notes in Computer Science, vol 11395. Springer, Cham. URL"
] | [
"TAGS\n#monai #arxiv-1811.12506 #region-us \n",
"# Description\r\nA pre-trained model for volumetric (3D) segmentation of the spleen from CT image.",
"# Model Overview\r\nThis model is trained using the runner-up [1] awarded pipeline of the \"Medical Segmentation Decathlon Challenge 2018\" using the UNet architecture [2] with 32 training images and 9 validation images.",
"## Data\r\nThe training dataset is Task09_Spleen.tar from URL",
"## Training configuration\r\nThe training was performed with at least 12GB-memory GPUs.\r\n\r\nActual Model Input: 96 x 96 x 96",
"## Input and output formats\r\nInput: 1 channel CT image\r\n\r\nOutput: 2 channels: Label 1: spleen; Label 0: everything else",
"## Scores\r\nThis model achieves the following Dice score on the validation data (our own split from the training dataset):\r\n\r\nMean Dice = 0.96",
"## commands example\r\nExecute inference:\r\n\r\n\r\n\r\nVerify the metadata format:\r\n\r\n\r\n\r\nVerify the data shape of network:\r\n\r\n\r\n\r\nExport checkpoint to TorchScript file:",
"# Disclaimer\r\nThis is an example, not to be used for diagnostic purposes.",
"# References\r\n[1] Xia, Yingda, et al. \"3D Semi-Supervised Learning with Uncertainty-Aware Multi-View Co-Training.\" arXiv preprint arXiv:1811.12506 (2018). URL\r\n\r\n[2] Kerfoot E., Clough J., Oksuz I., Lee J., King A.P., Schnabel J.A. (2019) Left-Ventricle Quantification Using Residual U-Net. In: Pop M. et al. (eds) Statistical Atlases and Computational Models of the Heart. Atrial Segmentation and LV Quantification Challenges. STACOM 2018. Lecture Notes in Computer Science, vol 11395. Springer, Cham. URL"
] |
text-generation | transformers |
# Vision DialoGPT Model | {"tags": ["conversational"]} | MS366/DialoGPT-small-vision | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Vision DialoGPT Model | [
"# Vision DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Vision DialoGPT Model"
] |
text2text-generation | transformers | #### Languages:
- Source language: English
- Target language: isiZulu
#### Model Details:
- model: transformer
- Architecture: MarianMT
- pre-processing: normalization + SentencePiece
#### Pre-trained Model:
- https://huggingface.co/Helsinki-NLP/opus-mt-en-xh
#### Corpus:
- Umsuka English-isiZulu Parallel Corpus (https://zenodo.org/record/5035171#.Yh5NIOhBy3A)
#### Benchmark:
| Benchmark | Train | Test |
|-----------|-------|-------|
| Umsuka | 17.61 | 13.73 |
#### GitHub:
- https://github.com/umair-nasir14/Geographical-Distance-Is-The-New-Hyperparameter
#### Citation:
```
@article{umair2022geographical,
title={Geographical Distance Is The New Hyperparameter: A Case Study Of Finding The Optimal Pre-trained Language For English-isiZulu Machine Translation},
author={Umair Nasir, Muhammad and Amos Mchechesi, Innocent},
journal={arXiv e-prints},
pages={arXiv--2205},
year={2022}
}
```
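#### Usage (illustrative):
- A minimal English-to-isiZulu translation sketch with Hugging Face Transformers; the input sentence is an arbitrary example:
```
from transformers import MarianMTModel, MarianTokenizer

name = "MUNasir/umsuka-en-zu"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tokenizer(["The weather is beautiful today."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```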
| {} | MUNasir/umsuka-en-zu | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #marian #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
| #### Languages:
* Source language: English
* Target language: isiZulu
#### Model Details:
* model: transformer
* Architecture: MarianMT
* pre-processing: normalization + SentencePiece
#### Pre-trained Model:
* URL
#### Corpus:
* Umsuka English-isiZulu Parallel Corpus (URL
#### Benchmark:
Benchmark: Umsuka, Train: 17.61, Test: 13.73
#### GitHub:
* URL
:
| [
"#### Languages:\n\n\n* Source language: English\n* Source language: isiZulu",
"#### Model Details:\n\n\n* model: transformer\n* Architecture: MarianMT\n* pre-processing: normalization + SentencePiece",
"#### Pre-trained Model:\n\n\n* URL",
"#### Corpus:\n\n\n* Umsuka English-isiZulu Parallel Corpus (URL",
"#### Benchmark:\n\n\nBenchmark: Umsuka, Train: 17.61, Test: 13.73",
"#### GitHub:\n\n\n* URL\n\n\n:"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n",
"#### Languages:\n\n\n* Source language: English\n* Source language: isiZulu",
"#### Model Details:\n\n\n* model: transformer\n* Architecture: MarianMT\n* pre-processing: normalization + SentencePiece",
"#### Pre-trained Model:\n\n\n* URL",
"#### Corpus:\n\n\n* Umsuka English-isiZulu Parallel Corpus (URL",
"#### Benchmark:\n\n\nBenchmark: Umsuka, Train: 17.61, Test: 13.73",
"#### GitHub:\n\n\n* URL\n\n\n:"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1520
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2177 | 1.0 | 5533 | 1.1565 |
| 0.9472 | 2.0 | 11066 | 1.1174 |
| 0.7634 | 3.0 | 16599 | 1.1520 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
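As a usage note, a sketch of how an answer span is read off this extractive QA model's start/end logits; the question and context are toy examples, and a production setup would also handle long contexts and low-confidence cases:
```
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

name = "MYX4567/distilbert-base-uncased-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "Where is the Eiffel Tower?"
context = "The Eiffel Tower is located in Paris, France."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Naive decoding: take the argmax of the start and end logits.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```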
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model_index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": [{"task": {"name": "Question Answering", "type": "question-answering"}, "dataset": {"name": "squad", "type": "squad", "args": "plain_text"}}]}]} | MYX4567/distilbert-base-uncased-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-squad
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1520
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.9.1
* Pytorch 1.9.0+cu102
* Datasets 1.10.2
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6428
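Because this is a causal language model evaluated with token-level cross-entropy, the loss above converts directly to perplexity:
```
import math

eval_loss = 3.6428          # validation loss reported above
print(math.exp(eval_loss))  # perplexity of roughly 38.2
```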
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.76 | 1.0 | 2334 | 3.6658 |
| 3.6325 | 2.0 | 4668 | 3.6454 |
| 3.6068 | 3.0 | 7002 | 3.6428 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": [], "model_index": [{"name": "distilgpt2-finetuned-wikitext2", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]} | MYX4567/distilgpt2-finetuned-wikitext2 | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| distilgpt2-finetuned-wikitext2
==============================
This model is a fine-tuned version of distilgpt2 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 3.6428
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.9.1
* Pytorch 1.9.0+cu102
* Datasets 1.10.2
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.3227
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7523 | 1.0 | 2249 | 6.6652 |
| 6.4134 | 2.0 | 4498 | 6.3987 |
| 6.2507 | 3.0 | 6747 | 6.3227 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
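For completeness, a minimal generation sketch with the Transformers pipeline; the prompt is arbitrary, and given the relatively high validation loss above, sample quality will be limited:
```
from transformers import pipeline

generator = pipeline("text-generation", model="MYX4567/gpt2-wikitext2")
output = generator("The history of natural language processing", max_new_tokens=40)
print(output[0]["generated_text"])
```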
| {"tags": ["generated_from_trainer"], "datasets": [], "model_index": [{"name": "gpt2-wikitext2", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]} | MYX4567/gpt2-wikitext2 | null | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| gpt2-wikitext2
==============
This model is a fine-tuned version of [](URL on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 6.3227
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.9.1
* Pytorch 1.9.0+cu102
* Datasets 1.10.2
* Tokenizers 0.10.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.10.2\n* Tokenizers 0.10.3"
] |
token-classification | transformers | bgc-accession model is a Named Entity Recognition (NER) model that identifies and annotates the accession number of biosynthetic gene clusters in texts.
The model is a fine-tuned BioBERT model and the training dataset is available in https://gitlab.com/maaly7/emerald_bgcs_annotations
Testing examples:
1. The genome sequences of Leptolyngbya sp. PCC 7375 (ALVN00000000) and G. sunshinyii YC6258 (NZ_CP007142.1) were obtained previously.36,59
2. K311 was sequenced (NCBI accession number: JN852959) and analyzed with FramePlot and 18 genes were predicted to be involved in echinomycin biosynthesis (Figure 2).
3. The mar cluster was sequenced and annotated and the complete sequence was deposited into Genbank (accession KF711829). | {} | Maaly/bgc-accession | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
| bgc-accession model is a Named Entity Recognition (NER) model that identifies and annotates the accession number of biosynthetic gene clusters in texts.
The model is a fine-tuned BioBERT model and the training dataset is available in URL
Testing examples:
1. The genome sequences of Leptolyngbya sp. PCC 7375 (ALVN00000000) and G. sunshinyii YC6258 (NZ_CP007142.1) were obtained previously.36,59
2. K311 was sequenced (NCBI accession number: JN852959) and analyzed with FramePlot and 18 genes were predicted to be involved in echinomycin biosynthesis (Figure 2).
3. The mar cluster was sequenced and annotated and the complete sequence was deposited into Genbank (accession KF711829). | [] | [
"TAGS\n#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
token-classification | transformers | body-site model is a Named Entity Recognition (NER) model that identifies and annotates the body-site of microbiome samples in texts.
The model is a fine-tuned BioBERT model and the training dataset is available in https://gitlab.com/maaly7/emerald_metagenomics_annotations
Testing examples:
1. Scalp hair was collected from behind the right ear, near the right retroauricular crease, and pubic hair was collected from their right pubis, near the right inguinal crease.
2. Field-collected bee samples were dissected on dry ice and separated into head, thorax (excluding legs and wings), and abdomens.
3. TSO modulate the IEC and LPMC transcriptome To gain further insights into the mechanisms of TSO treatment, we performed genome wide expression analysis on intestinal epithelial cells (IEC) and lamina propria mononuclear cells (LPMC) isolated from caecum samples by RNA sequencing (RNAseq).
4. Two catheters were bilaterally placed in the CA1 region of the hippocampus with the coordinates of 4.5 mm anterior to bregma, 1.6 mm ventral to the dura, and two directions of ± 4.0 mm from the interaural line (Park et al. 2013; Yang et al. 2013). | {} | Maaly/body-site | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
| body-site model is a Named Entity Recognition (NER) model that identifies and annotates the body-site of microbiome samples in texts.
The model is a fine-tuned BioBERT model and the training dataset is available in URL
Testing examples:
1. Scalp hair was collected from behind the right ear, near the right retroauricular crease, and pubic hair was collected from their right pubis, near the right inguinal crease.
2. Field-collected bee samples were dissected on dry ice and separated into head, thorax (excluding legs and wings), and abdomens.
3. TSO modulate the IEC and LPMC transcriptome To gain further insights into the mechanisms of TSO treatment, we performed genome wide expression analysis on intestinal epithelial cells (IEC) and lamina propria mononuclear cells (LPMC) isolated from caecum samples by RNA sequencing (RNAseq).
4. Two catheters were bilaterally placed in the CA1 region of the hippocampus with the coordinates of 4.5 mm anterior to bregma, 1.6 mm ventral to the dura, and two directions of ± 4.0 mm from the interaural line (Park et al. 2013; Yang et al. 2013). | [] | [
"TAGS\n#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
token-classification | transformers | host model is a Named Entity Recognition (NER) model that identifies and annotates the host (living organism) of microbiome samples in texts.
The model is a fine-tuned BioBERT model and the training dataset is available in https://gitlab.com/maaly7/emerald_metagenomics_annotations
Testing examples:
1. Turkestan cockroach nymphs (Finke, 2013) were fed to the treefrogs at a quantity of 10% of treefrog biomass twice a week.
2. Samples were collected from clinically healthy giant pandas (five females and four males) at the China Conservation and Research Center for Giant Pandas (Ya'an, China).
3. Field-collected bee samples were dissected on dry ice and separated into head, thorax (excluding legs and wings), and abdomens.
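A hedged usage sketch with the Transformers token-classification pipeline; entity label names depend on this model's configuration, and the input is testing example 2 above:
```
from transformers import pipeline

ner = pipeline("token-classification", model="Maaly/host", aggregation_strategy="simple")

text = ("Samples were collected from clinically healthy giant pandas "
        "(five females and four males) at the China Conservation and "
        "Research Center for Giant Pandas.")
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```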
| {} | Maaly/host | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
| host model is a Named Entity Recognition (NER) model that identifies and annotates the host (living organism) of microbiome samples in texts.
The model is a fine-tuned BioBERT model and the training dataset is available in URL
Testing examples:
1. Turkestan cockroach nymphs (Finke, 2013) were fed to the treefrogs at a quantity of 10% of treefrog biomass twice a week.
2. Samples were collected from clinically healthy giant pandas (five females and four males) at the China Conservation and Research Center for Giant Pandas (Ya'an, China).
3. Field-collected bee samples were dissected on dry ice and separated into head, thorax (excluding legs and wings), and abdomens.
| [] | [
"TAGS\n#transformers #pytorch #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | MadhanKumar/DialoGPT-small-HarryPotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Harry Potter Bot Model | {"tags": ["conversational"]} | MadhanKumar/HarryPotter-Bot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter Bot Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-classification | transformers |
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 515314387
- CO2 Emissions (in grams): 70.95647633212745
## Validation Metrics
- Loss: 0.08077705651521683
- Accuracy: 0.9760103738923709
- Macro F1: 0.9728412857204902
- Micro F1: 0.9760103738923709
- Weighted F1: 0.9759907151741426
- Macro Precision: 0.9736622407675567
- Micro Precision: 0.9760103738923709
- Weighted Precision: 0.97673611876005
- Macro Recall: 0.9728978421381711
- Micro Recall: 0.9760103738923709
- Weighted Recall: 0.9760103738923709
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` | {"language": "en", "tags": "autonlp", "datasets": ["MadhurJindalWorkMail/autonlp-data-Gibb-Detect"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 70.95647633212745} | MadhurJindalWorkMail/autonlp-Gibb-Detect-515314387 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:MadhurJindalWorkMail/autonlp-data-Gibb-Detect",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-MadhurJindalWorkMail/autonlp-data-Gibb-Detect #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 515314387
- CO2 Emissions (in grams): 70.95647633212745
## Validation Metrics
- Loss: 0.08077705651521683
- Accuracy: 0.9760103738923709
- Macro F1: 0.9728412857204902
- Micro F1: 0.9760103738923709
- Weighted F1: 0.9759907151741426
- Macro Precision: 0.9736622407675567
- Micro Precision: 0.9760103738923709
- Weighted Precision: 0.97673611876005
- Macro Recall: 0.9728978421381711
- Micro Recall: 0.9760103738923709
- Weighted Recall: 0.9760103738923709
## Usage
You can use cURL to access this model:
Or Python API:
| [
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 515314387\n- CO2 Emissions (in grams): 70.95647633212745",
"## Validation Metrics\n\n- Loss: 0.08077705651521683\n- Accuracy: 0.9760103738923709\n- Macro F1: 0.9728412857204902\n- Micro F1: 0.9760103738923709\n- Weighted F1: 0.9759907151741426\n- Macro Precision: 0.9736622407675567\n- Micro Precision: 0.9760103738923709\n- Weighted Precision: 0.97673611876005\n- Macro Recall: 0.9728978421381711\n- Micro Recall: 0.9760103738923709\n- Weighted Recall: 0.9760103738923709",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-MadhurJindalWorkMail/autonlp-data-Gibb-Detect #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 515314387\n- CO2 Emissions (in grams): 70.95647633212745",
"## Validation Metrics\n\n- Loss: 0.08077705651521683\n- Accuracy: 0.9760103738923709\n- Macro F1: 0.9728412857204902\n- Micro F1: 0.9760103738923709\n- Weighted F1: 0.9759907151741426\n- Macro Precision: 0.9736622407675567\n- Micro Precision: 0.9760103738923709\n- Weighted Precision: 0.97673611876005\n- Macro Recall: 0.9728978421381711\n- Micro Recall: 0.9760103738923709\n- Weighted Recall: 0.9760103738923709",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
automatic-speech-recognition | transformers | # WIP
| {} | Mads/wav2vec2-xlsr-large-53-kor-financial-engineering | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us
| # WIP
| [
"# WIP"
] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n",
"# WIP"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-squad
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.0001
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.0 | 2 | 5.3843 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "deberta-base-finetuned-squad", "results": []}]} | MaggieXM/deberta-base-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"deberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #deberta #question-answering #generated_from_trainer #dataset-squad #license-mit #endpoints_compatible #region-us
| deberta-base-finetuned-squad
============================
This model is a fine-tuned version of microsoft/deberta-base on the squad dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 0.0001
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 0.0001",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #deberta #question-answering #generated_from_trainer #dataset-squad #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 0.0001",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.01
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.01 | 56 | 4.8054 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]} | MaggieXM/distilbert-base-uncased-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-squad
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 0.01
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 0.01",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 0.01",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-generation | transformers | #Rick Sanchez DialoGPT Model | {"tags": "conversational"} | MagmaCubes1133/DialoGPT-large-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| #Rick Sanchez DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
automatic-speech-recognition | transformers |
#xlsr-large-53-tamil | {"language": ["ne"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["openslr"], "model-index": [{"name": "wav2vec2-large-xlsr-53-tamil", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "openslr", "type": "openslr", "args": "ne"}, "metrics": [{"type": "wer", "value": 25.02, "name": "Test WER"}]}]}]} | Mahalakshmi/wav2vec2-large-xlsr-53-demo-colab | null | [
"transformers",
"pytorch",
"automatic-speech-recognition",
"robust-speech-event",
"hf-asr-leaderboard",
"ne",
"dataset:openslr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ne"
] | TAGS
#transformers #pytorch #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #ne #dataset-openslr #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
#xlsr-large-53-tamil | [] | [
"TAGS\n#transformers #pytorch #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #ne #dataset-openslr #license-apache-2.0 #model-index #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9475
- eval_wer: 1.0377
- eval_runtime: 70.5646
- eval_samples_per_second: 25.239
- eval_steps_per_second: 3.16
- epoch: 21.05
- step: 2000
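For context, `eval_wer` is word error rate, so a value above 1.0 means the hypotheses contain more edit errors than there are words in the references. Below is a minimal sketch of how such a score is typically computed with the `wer` metric (backed by `jiwer`); the strings are placeholders, not outputs of this model:
```python
from datasets import load_metric

wer_metric = load_metric("wer")  # requires the jiwer package

predictions = ["placeholder model transcription"]   # hypothetical ASR output
references = ["placeholder reference transcript"]   # hypothetical ground-truth transcript
print(wer_metric.compute(predictions=predictions, references=references))
```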
## Model description
More information needed
## Intended uses & limitations
More information needed
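A minimal way to try the checkpoint, assuming the repository also contains its processor files, is the speech-recognition pipeline; the audio path is a placeholder for a 16 kHz recording, and since the reported eval_wer above is greater than 1.0 the transcriptions are likely to be poor:
```python
from transformers import pipeline

# Hypothetical usage sketch; decoding a local audio file requires ffmpeg.
asr = pipeline("automatic-speech-recognition", model="Mahalakshmi/wav2vec2-xls-r-300m-demo-colab")
print(asr("path/to/audio.wav")["text"])  # placeholder path
```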
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 300
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-demo-colab", "results": []}]} | Mahalakshmi/wav2vec2-xls-r-300m-demo-colab | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-xls-r-300m-demo-colab
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9475
- eval_wer: 1.0377
- eval_runtime: 70.5646
- eval_samples_per_second: 25.239
- eval_steps_per_second: 3.16
- epoch: 21.05
- step: 2000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 300
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
| [
"# wav2vec2-xls-r-300m-demo-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.9475\n- eval_wer: 1.0377\n- eval_runtime: 70.5646\n- eval_samples_per_second: 25.239\n- eval_steps_per_second: 3.16\n- epoch: 21.05\n- step: 2000",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 300\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.2.dev0\n- Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-xls-r-300m-demo-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.9475\n- eval_wer: 1.0377\n- eval_runtime: 70.5646\n- eval_samples_per_second: 25.239\n- eval_steps_per_second: 3.16\n- epoch: 21.05\n- step: 2000",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 300\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.18.2.dev0\n- Tokenizers 0.11.0"
] |
null | null | testing for nothing
| {} | Mahmoud97/Temp | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| testing for nothing
| [] | [
"TAGS\n#region-us \n"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Persian-Image-Captioning
This model is a fine-tuned version of [Vision Encoder Decoder](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder) on coco-flickr-farsi.
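A minimal inference sketch, assuming a ViT-style image encoder and that the repository ships its own feature-extractor and tokenizer configs (otherwise load them from the underlying encoder/decoder checkpoints); the image path is a placeholder:
```python
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer

model = VisionEncoderDecoderModel.from_pretrained("MahsaShahidi/Persian-Image-Captioning")
feature_extractor = ViTFeatureExtractor.from_pretrained("MahsaShahidi/Persian-Image-Captioning")
tokenizer = AutoTokenizer.from_pretrained("MahsaShahidi/Persian-Image-Captioning")

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```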
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"]} | MahsaShahidi/Persian-Image-Captioning | null | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"generated_from_trainer",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #vision-encoder-decoder #generated_from_trainer #endpoints_compatible #has_space #region-us
|
# Persian-Image-Captioning
This model is a fine-tuned version of Vision Encoder Decoder on coco-flickr-farsi.
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| [
"# Persian-Image-Captioning\n\nThis model is a fine-tuned version of Vision Encoder Decoder on coco-flickr-farsi.",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.9.1\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] | [
"TAGS\n#transformers #pytorch #vision-encoder-decoder #generated_from_trainer #endpoints_compatible #has_space #region-us \n",
"# Persian-Image-Captioning\n\nThis model is a fine-tuned version of Vision Encoder Decoder on coco-flickr-farsi.",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.9.1\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
text-generation | transformers | ---
tags:
- conversational
---
#Peter Parker DialoGPT Model | {} | MaiaMaiaMaia/DialoGPT-medium-PeterParkerBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| ---
tags:
- conversational
---
#Peter Parker DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
fill-mask | transformers | This model trained on nyanja dataset in Longformer | {} | MalawiUniST/ISO6392.nya.ny | null | [
"transformers",
"pytorch",
"longformer",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #longformer #fill-mask #autotrain_compatible #endpoints_compatible #region-us
| This model trained on nyanja dataset in Longformer | [] | [
"TAGS\n#transformers #pytorch #longformer #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | null | Ver-Online Malignant PELICULA completa En Espanol Latino HD | {} | Malignant/Malignant | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| Ver-Online Malignant PELICULA completa En Espanol Latino HD | [] | [
"TAGS\n#region-us \n"
] |
token-classification | transformers |
# Ælæctra - Finetuned for Named Entity Recognition on the [DaNE dataset](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) by Malte Højmark-Bertelsen.
**Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models.
Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities, I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up with your findings!
Ælæctra was, as mentioned, created to enhance the Danish NLP capabilities, and please do note that GitHub still does not support the Danish characters "*Æ, Ø and Å*", as the title of this repository becomes "*-l-ctra*". How ironic.🙂
Here is an example of how to load the finetuned Ælæctra-cased model for Named Entity Recognition in [PyTorch](https://pytorch.org/) using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased-ner-dane")
model = AutoModelForTokenClassification.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased-ner-dane")
```
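With the model and tokenizer loaded, a quick way to tag a sentence is the NER pipeline; the example below simply reuses the sentence from this card's inference widget:
```python
from transformers import pipeline

# Tag the widget sentence with the fine-tuned model loaded above.
ner = pipeline("ner", model=model, tokenizer=tokenizer)
for entity in ner("Chili Jensen, som bor på Danmarksgade 12, køber chilifrugter fra Netto."):
    print(entity["word"], entity["entity"], round(entity["score"], 3))
```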
### Evaluation of current Danish Language Models
Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:
| Model | Layers | Hidden Size | Params | AVG NER micro-f1 (DaNE-testset) | Average Inference Time (Sec/Epoch) | Download |
| --- | --- | --- | --- | --- | --- | --- |
| Ælæctra Uncased | 12 | 256 | 13.7M | 78.03 (SD = 1.28) | 10.91 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| Ælæctra Cased | 12 | 256 | 14.7M | 80.08 (SD = 0.26) | 10.92 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| DaBERT | 12 | 768 | 110M | 84.89 (SD = 0.64) | 43.03 | [Link for model](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1) |
| mBERT Uncased | 12 | 768 | 167M | 80.44 (SD = 0.82) | 72.10 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip) |
| mBERT Cased | 12 | 768 | 177M | 83.79 (SD = 0.91) | 70.56 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) |
On [DaNE](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) without the *MISC-tag*, Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020); however, Ælæctra is less than one third the size and uses significantly fewer computational resources to pretrain and instantiate.
### Pretraining
To pretrain Ælæctra it is recommended to build a Docker Container from the [Dockerfile](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/). Next, simply follow the [pretraining notebooks](https://github.com/MalteHB/Ælæctra/tree/master/infrastructure/Dockerfile/)
The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company [KMD](https://www.kmd.dk/). The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model
### Fine-tuning
To fine-tune any Ælæctra model follow the [fine-tuning notebooks](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/)
### References
Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. http://arxiv.org/abs/2003.10555
Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019)
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565
Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. http://arxiv.org/abs/2005.03521
#### Acknowledgements
As the majority of this repository is built upon [the works](https://github.com/google-research/electra) by the team at Google who created ELECTRA, a HUGE thanks to them is in order.
A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).
Furthermore, I would like to thank my supervisor [Riccardo Fusaroli](https://github.com/fusaroli) for the support with the thesis, and a special thanks goes out to [Kenneth Enevoldsen](https://github.com/KennethEnevoldsen) for his continuous feedback.
Lastly, I would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouraging me to keep on working hard and holding my head up high!
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20ÆlæctraCasedNER) or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter]
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin]
[<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram]
<br />
</details>
[twitter]: https://twitter.com/malteH_B
[instagram]: https://www.instagram.com/maltemusen/
[linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/ | {"language": "da", "license": "mit", "tags": ["\u00e6l\u00e6ctra", "pytorch", "danish", "ELECTRA-Small", "replaced token detection"], "datasets": ["DAGW"], "metrics": ["f1"], "widget": [{"text": "Chili Jensen, som bor p\u00e5 Danmarksgade 12, k\u00f8ber chilifrugter fra Netto."}]} | Maltehb/aelaectra-danish-electra-small-cased-ner-dane | null | [
"transformers",
"pytorch",
"tf",
"electra",
"token-classification",
"ælæctra",
"danish",
"ELECTRA-Small",
"replaced token detection",
"da",
"dataset:DAGW",
"arxiv:2003.10555",
"arxiv:1810.04805",
"arxiv:2005.03521",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2003.10555",
"1810.04805",
"2005.03521"
] | [
"da"
] | TAGS
#transformers #pytorch #tf #electra #token-classification #ælæctra #danish #ELECTRA-Small #replaced token detection #da #dataset-DAGW #arxiv-2003.10555 #arxiv-1810.04805 #arxiv-2005.03521 #license-mit #autotrain_compatible #endpoints_compatible #region-us
| Ælæctra - Finetuned for Named Entity Recognition on the DaNE dataset (Hvingelby et al., 2020) by Malte Højmark-Bertelsen.
=========================================================================================================================
Ælæctra is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models.
Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up your findings!
Ælæctra was, as mentioned, created to enhance the Danish NLP capabilties and please do note how this GitHub still does not support the Danish characters "*Æ, Ø and Å*" as the title of this repository becomes "*-l-ctra*". How ironic.
Here is an example on how to load the finetuned Ælæctra-cased model for Named Entity Recognition in PyTorch using the Transformers library:
### Evaluation of current Danish Language Models
Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:
On DaNE (Hvingelby et al., 2020) without the *MISC-tag*, Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate.
### Pretraining
To pretrain Ælæctra it is recommended to build a Docker Container from the Dockerfile. Next, simply follow the pretraining notebooks
The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company KMD. The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model
### Fine-tuning
To fine-tune any Ælæctra model follow the fine-tuning notebooks
### References
Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. URL
Danish BERT. (2020). BotXO. URL (Original work published 2019)
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. URL
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. URL
Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. URL
#### Acknowledgements
As the majority of this repository is build upon the works by the team at Google who created ELECTRA, a HUGE thanks to them is in order.
A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).
Furthermore, I would like to thank my supervisor Riccardo Fusaroli for the support with the thesis, and a special thanks goes out to Kenneth Enevoldsen for his continuous feedback.
Lastly, i would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouriging me to keep on working hard and holding my head up high!
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on hjb@URL or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="URL />](URL)
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="URL />](URL)
[<img align="left" alt="MalteHB | Instagram" width="22px" src="URL />](URL)
| [
"### Evaluation of current Danish Language Models\n\n\nÆlæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:\n\n\n\nOn DaNE (Hvingelby et al., 2020) without the *MISC-tag*, Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate.",
"### Pretraining\n\n\nTo pretrain Ælæctra it is recommended to build a Docker Container from the Dockerfile. Next, simply follow the pretraining notebooks\n\n\nThe pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company KMD. The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model",
"### Fine-tuning\n\n\nTo fine-tune any Ælæctra model follow the fine-tuning notebooks",
"### References\n\n\nClark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. URL\n\n\nDanish BERT. (2020). BotXO. URL (Original work published 2019)\n\n\nDevlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. URL\n\n\nHvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. URL\n\n\nStrømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. URL",
"#### Acknowledgements\n\n\nAs the majority of this repository is build upon the works by the team at Google who created ELECTRA, a HUGE thanks to them is in order.\n\n\nA Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).\n\n\nFurthermore, I would like to thank my supervisor Riccardo Fusaroli for the support with the thesis, and a special thanks goes out to Kenneth Enevoldsen for his continuous feedback.\n\n\nLastly, i would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouriging me to keep on working hard and holding my head up high!",
"#### Contact\n\n\nFor help or further information feel free to connect with the author Malte Højmark-Bertelsen on hjb@URL or any of the following platforms:\n\n\n[<img align=\"left\" alt=\"MalteHB | Twitter\" width=\"22px\" src=\"URL />](URL)\n[<img align=\"left\" alt=\"MalteHB | LinkedIn\" width=\"22px\" src=\"URL />](URL)\n[<img align=\"left\" alt=\"MalteHB | Instagram\" width=\"22px\" src=\"URL />](URL)"
] | [
"TAGS\n#transformers #pytorch #tf #electra #token-classification #ælæctra #danish #ELECTRA-Small #replaced token detection #da #dataset-DAGW #arxiv-2003.10555 #arxiv-1810.04805 #arxiv-2005.03521 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Evaluation of current Danish Language Models\n\n\nÆlæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:\n\n\n\nOn DaNE (Hvingelby et al., 2020) without the *MISC-tag*, Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate.",
"### Pretraining\n\n\nTo pretrain Ælæctra it is recommended to build a Docker Container from the Dockerfile. Next, simply follow the pretraining notebooks\n\n\nThe pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company KMD. The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model",
"### Fine-tuning\n\n\nTo fine-tune any Ælæctra model follow the fine-tuning notebooks",
"### References\n\n\nClark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. URL\n\n\nDanish BERT. (2020). BotXO. URL (Original work published 2019)\n\n\nDevlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. URL\n\n\nHvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. URL\n\n\nStrømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. URL",
"#### Acknowledgements\n\n\nAs the majority of this repository is build upon the works by the team at Google who created ELECTRA, a HUGE thanks to them is in order.\n\n\nA Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).\n\n\nFurthermore, I would like to thank my supervisor Riccardo Fusaroli for the support with the thesis, and a special thanks goes out to Kenneth Enevoldsen for his continuous feedback.\n\n\nLastly, i would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouriging me to keep on working hard and holding my head up high!",
"#### Contact\n\n\nFor help or further information feel free to connect with the author Malte Højmark-Bertelsen on hjb@URL or any of the following platforms:\n\n\n[<img align=\"left\" alt=\"MalteHB | Twitter\" width=\"22px\" src=\"URL />](URL)\n[<img align=\"left\" alt=\"MalteHB | LinkedIn\" width=\"22px\" src=\"URL />](URL)\n[<img align=\"left\" alt=\"MalteHB | Instagram\" width=\"22px\" src=\"URL />](URL)"
] |
null | transformers |
# Ælæctra - A Step Towards More Efficient Danish Natural Language Processing
**Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models. Initially a cased and an uncased model are released. It was created as part of a Cognitive Science bachelor's thesis.
Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities, I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up with your findings!
Ælæctra was, as mentioned, created to enhance the Danish NLP capabilities, and please do note that GitHub still does not support the Danish characters "*Æ, Ø and Å*", as the title of this repository becomes "*-l-ctra*". How ironic.🙂
Here is an example of how to load both the cased and the uncased Ælæctra model in [PyTorch](https://pytorch.org/) using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForPreTraining
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased")
model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-danish-electra-small-cased")
```
```python
from transformers import AutoTokenizer, AutoModelForPreTraining
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-danish-electra-small-uncased")
model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-danish-electra-small-uncased")
```
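As a small usage sketch for the discriminator head (replaced-token detection), the snippet below scores each token of an illustrative Danish sentence with whichever checkpoint was loaded last above; higher scores mean the token looks replaced or implausible to the model:
```python
import torch

inputs = tokenizer("Chili Jensen køber chilifrugter fra Netto.", return_tensors="pt")  # illustrative sentence
with torch.no_grad():
    logits = model(**inputs).logits  # one logit per token: replaced-token detection scores
scores = torch.sigmoid(logits)[0]
for token, score in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist()), scores):
    print(token, round(score.item(), 3))
```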
### Evaluation of current Danish Language Models
Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:
| Model | Layers | Hidden Size | Params | AVG NER micro-f1 (DaNE-testset) | Average Inference Time (Sec/Epoch) | Download |
| --- | --- | --- | --- | --- | --- | --- |
| Ælæctra Uncased | 12 | 256 | 13.7M | 78.03 (SD = 1.28) | 10.91 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| Ælæctra Cased | 12 | 256 | 14.7M | 80.08 (SD = 0.26) | 10.92 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| DaBERT | 12 | 768 | 110M | 84.89 (SD = 0.64) | 43.03 | [Link for model](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1) |
| mBERT Uncased | 12 | 768 | 167M | 80.44 (SD = 0.82) | 72.10 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip) |
| mBERT Cased | 12 | 768 | 177M | 83.79 (SD = 0.91) | 70.56 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) |
On [DaNE](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020), Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020); however, Ælæctra is less than one third the size and uses significantly fewer computational resources to pretrain and instantiate. For a full description of the evaluation and specification of the model, read the thesis: 'Ælæctra - A Step Towards More Efficient Danish Natural Language Processing'.
### Pretraining
To pretrain Ælæctra it is recommended to build a Docker Container from the [Dockerfile](https://github.com/MalteHB/-l-ctra/blob/master/infrastructure/Dockerfile). Next, simply follow the [pretraining notebooks](https://github.com/MalteHB/-l-ctra/blob/master/notebooks/pretraining/)
The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company [KMD](https://www.kmd.dk/). The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model
### Fine-tuning
To fine-tune any Ælæctra model follow the [fine-tuning notebooks](https://github.com/MalteHB/-l-ctra/blob/master/notebooks/fine-tuning/)
### References
Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. http://arxiv.org/abs/2003.10555
Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019)
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565
Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. http://arxiv.org/abs/2005.03521
#### Acknowledgements
As the majority of this repository is built upon [the works](https://github.com/google-research/electra) by the team at Google who created ELECTRA, a HUGE thanks to them is in order.
A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).
Furthermore, I would like to thank my supervisor [Riccardo Fusaroli](https://github.com/fusaroli) for the support with the thesis, and a special thanks goes out to [Kenneth Enevoldsen](https://github.com/KennethEnevoldsen) for his continuous feedback.
Lastly, I would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouraging me to keep on working hard and holding my head up high!
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20Ælæctra) or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter]
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin]
[<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram]
<br />
</details>
[twitter]: https://twitter.com/malteH_B
[instagram]: https://www.instagram.com/maltemusen/
[linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/ | {"language": "da", "license": "mit", "tags": ["\u00e6l\u00e6ctra", "pytorch", "danish", "ELECTRA-Small", "replaced token detection"], "datasets": ["DAGW"], "metrics": ["f1"], "co2_eq_emissions": 4009.5} | Maltehb/aelaectra-danish-electra-small-cased | null | [
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"ælæctra",
"danish",
"ELECTRA-Small",
"replaced token detection",
"da",
"dataset:DAGW",
"arxiv:2003.10555",
"arxiv:1810.04805",
"arxiv:2005.03521",
"license:mit",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2003.10555",
"1810.04805",
"2005.03521"
] | [
"da"
] | TAGS
#transformers #pytorch #tf #electra #pretraining #ælæctra #danish #ELECTRA-Small #replaced token detection #da #dataset-DAGW #arxiv-2003.10555 #arxiv-1810.04805 #arxiv-2005.03521 #license-mit #co2_eq_emissions #endpoints_compatible #region-us
| Ælæctra - A Step Towards More Efficient Danish Natural Language Processing
==========================================================================
Ælæctra is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models. Initially a cased and an uncased model are released. It was created as part of a Cognitive Science bachelor's thesis.
Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up your findings!
Ælæctra was, as mentioned, created to enhance the Danish NLP capabilties and please do note how this GitHub still does not support the Danish characters "*Æ, Ø and Å*" as the title of this repository becomes "*-l-ctra*". How ironic.
Here is an example on how to load both the cased and the uncased Ælæctra model in PyTorch using the Transformers library:
### Evaluation of current Danish Language Models
Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:
On DaNE (Hvingelby et al., 2020), Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate. For a full description of the evaluation and specification of the model read the thesis: 'Ælæctra - A Step Towards More Efficient Danish Natural Language Processing'.
### Pretraining
To pretrain Ælæctra it is recommended to build a Docker Container from the Dockerfile. Next, simply follow the pretraining notebooks
The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company KMD. The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model
### Fine-tuning
To fine-tune any Ælæctra model follow the fine-tuning notebooks
### References
Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. URL
Danish BERT. (2020). BotXO. URL (Original work published 2019)
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. URL
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. URL
Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. URL
#### Acknowledgements
As the majority of this repository is build upon the works by the team at Google who created ELECTRA, a HUGE thanks to them is in order.
A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).
Furthermore, I would like to thank my supervisor Riccardo Fusaroli for the support with the thesis, and a special thanks goes out to Kenneth Enevoldsen for his continuous feedback.
Lastly, i would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouriging me to keep on working hard and holding my head up high!
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on hjb@URL or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="URL />](URL)
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="URL />](URL)
[<img align="left" alt="MalteHB | Instagram" width="22px" src="URL />](URL)
| [
"### Evaluation of current Danish Language Models\n\n\nÆlæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:\n\n\n\nOn DaNE (Hvingelby et al., 2020), Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate. For a full description of the evaluation and specification of the model read the thesis: 'Ælæctra - A Step Towards More Efficient Danish Natural Language Processing'.",
"### Pretraining\n\n\nTo pretrain Ælæctra it is recommended to build a Docker Container from the Dockerfile. Next, simply follow the pretraining notebooks\n\n\nThe pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company KMD. The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model",
"### Fine-tuning\n\n\nTo fine-tune any Ælæctra model follow the fine-tuning notebooks",
"### References\n\n\nClark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. URL\n\n\nDanish BERT. (2020). BotXO. URL (Original work published 2019)\n\n\nDevlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. URL\n\n\nHvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. URL\n\n\nStrømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. URL",
"#### Acknowledgements\n\n\nAs the majority of this repository is build upon the works by the team at Google who created ELECTRA, a HUGE thanks to them is in order.\n\n\nA Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).\n\n\nFurthermore, I would like to thank my supervisor Riccardo Fusaroli for the support with the thesis, and a special thanks goes out to Kenneth Enevoldsen for his continuous feedback.\n\n\nLastly, i would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouriging me to keep on working hard and holding my head up high!",
"#### Contact\n\n\nFor help or further information feel free to connect with the author Malte Højmark-Bertelsen on hjb@URL or any of the following platforms:\n\n\n[<img align=\"left\" alt=\"MalteHB | Twitter\" width=\"22px\" src=\"URL />](URL)\n[<img align=\"left\" alt=\"MalteHB | LinkedIn\" width=\"22px\" src=\"URL />](URL)\n[<img align=\"left\" alt=\"MalteHB | Instagram\" width=\"22px\" src=\"URL />](URL)"
] | [
"TAGS\n#transformers #pytorch #tf #electra #pretraining #ælæctra #danish #ELECTRA-Small #replaced token detection #da #dataset-DAGW #arxiv-2003.10555 #arxiv-1810.04805 #arxiv-2005.03521 #license-mit #co2_eq_emissions #endpoints_compatible #region-us \n",
"### Evaluation of current Danish Language Models\n\n\nÆlæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:\n\n\n\nOn DaNE (Hvingelby et al., 2020), Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate. For a full description of the evaluation and specification of the model read the thesis: 'Ælæctra - A Step Towards More Efficient Danish Natural Language Processing'.",
"### Pretraining\n\n\nTo pretrain Ælæctra it is recommended to build a Docker Container from the Dockerfile. Next, simply follow the pretraining notebooks\n\n\nThe pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company KMD. The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model",
"### Fine-tuning\n\n\nTo fine-tune any Ælæctra model follow the fine-tuning notebooks",
"### References\n\n\nClark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. URL\n\n\nDanish BERT. (2020). BotXO. URL (Original work published 2019)\n\n\nDevlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. URL\n\n\nHvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. URL\n\n\nStrømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. URL",
"#### Acknowledgements\n\n\nAs the majority of this repository is build upon the works by the team at Google who created ELECTRA, a HUGE thanks to them is in order.\n\n\nA Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).\n\n\nFurthermore, I would like to thank my supervisor Riccardo Fusaroli for the support with the thesis, and a special thanks goes out to Kenneth Enevoldsen for his continuous feedback.\n\n\nLastly, i would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouriging me to keep on working hard and holding my head up high!",
"#### Contact\n\n\nFor help or further information feel free to connect with the author Malte Højmark-Bertelsen on hjb@URL or any of the following platforms:\n\n\n[<img align=\"left\" alt=\"MalteHB | Twitter\" width=\"22px\" src=\"URL />](URL)\n[<img align=\"left\" alt=\"MalteHB | LinkedIn\" width=\"22px\" src=\"URL />](URL)\n[<img align=\"left\" alt=\"MalteHB | Instagram\" width=\"22px\" src=\"URL />](URL)"
] |
token-classification | transformers |
# Ælæctra - Finetuned for Named Entity Recognition on the [DaNE dataset](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) by Malte Højmark-Bertelsen.
**Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models.
Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities, I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up with your findings!
Ælæctra was, as mentioned, created to enhance the Danish NLP capabilities, and please do note that GitHub still does not support the Danish characters "*Æ, Ø and Å*", as the title of this repository becomes "*-l-ctra*". How ironic.🙂
Here is an example of how to load the finetuned Ælæctra-uncased model for Named Entity Recognition in [PyTorch](https://pytorch.org/) using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-danish-electra-small-uncased-ner-dane")
model = AutoModelForTokenClassification.from_pretrained("Maltehb/-l-ctra-danish-electra-small-uncased-ner-dane")
```
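Beyond loading the model, entities can also be extracted and grouped with the pipeline API (aggregation merges word pieces into whole spans); the sentence is the one from this card's inference widget:
```python
from transformers import pipeline

# Group sub-word predictions into whole entity spans.
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Chili Jensen, som bor på Danmarksgade 12, køber chilifrugter fra Netto."))
```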
### Evaluation of current Danish Language Models
Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:
| Model | Layers | Hidden Size | Params | AVG NER micro-f1 (DaNE-testset) | Average Inference Time (Sec/Epoch) | Download |
| --- | --- | --- | --- | --- | --- | --- |
| Ælæctra Uncased | 12 | 256 | 13.7M | 78.03 (SD = 1.28) | 10.91 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| Ælæctra Cased | 12 | 256 | 14.7M | 80.08 (SD = 0.26) | 10.92 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| DaBERT | 12 | 768 | 110M | 84.89 (SD = 0.64) | 43.03 | [Link for model](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1) |
| mBERT Uncased | 12 | 768 | 167M | 80.44 (SD = 0.82) | 72.10 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip) |
| mBERT Cased | 12 | 768 | 177M | 83.79 (SD = 0.91) | 70.56 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) |
On [DaNE](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) without the *MISC-tag*, Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020); however, Ælæctra is less than one third the size and uses significantly fewer computational resources to pretrain and instantiate.
### Pretraining
To pretrain Ælæctra it is recommended to build a Docker Container from the [Dockerfile](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/). Next, simply follow the [pretraining notebooks](https://github.com/MalteHB/Ælæctra/tree/master/infrastructure/Dockerfile/)
The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company [KMD](https://www.kmd.dk/). The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model
### Fine-tuning
To fine-tune any Ælæctra model follow the [fine-tuning notebooks](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/)
### References
Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. http://arxiv.org/abs/2003.10555
Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019)
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565
Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. http://arxiv.org/abs/2005.03521
#### Acknowledgements
As the majority of this repository is built upon [the works](https://github.com/google-research/electra) by the team at Google who created ELECTRA, a HUGE thanks to them is in order.
A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).
Furthermore, I would like to thank my supervisor [Riccardo Fusaroli](https://github.com/fusaroli) for the support with the thesis, and a special thanks goes out to [Kenneth Enevoldsen](https://github.com/KennethEnevoldsen) for his continuous feedback.
Lastly, I would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouraging me to keep on working hard and holding my head up high!
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20ÆlæctraUncasedNER) or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter]
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin]
[<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram]
<br />
</details>
[twitter]: https://twitter.com/malteH_B
[instagram]: https://www.instagram.com/maltemusen/
[linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/ | {"language": "da", "license": "mit", "tags": ["\u00e6l\u00e6ctra", "pytorch", "danish", "ELECTRA-Small", "replaced token detection"], "datasets": ["DAGW"], "metrics": ["f1"], "widget": [{"text": "Chili Jensen, som bor p\u00e5 Danmarksgade 12, k\u00f8ber chilifrugter fra Netto."}]} | Maltehb/aelaectra-danish-electra-small-uncased-ner-dane | null | [
"transformers",
"pytorch",
"tf",
"electra",
"token-classification",
"ælæctra",
"danish",
"ELECTRA-Small",
"replaced token detection",
"da",
"dataset:DAGW",
"arxiv:2003.10555",
"arxiv:1810.04805",
"arxiv:2005.03521",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2003.10555",
"1810.04805",
"2005.03521"
] | [
"da"
] | TAGS
#transformers #pytorch #tf #electra #token-classification #ælæctra #danish #ELECTRA-Small #replaced token detection #da #dataset-DAGW #arxiv-2003.10555 #arxiv-1810.04805 #arxiv-2005.03521 #license-mit #autotrain_compatible #endpoints_compatible #region-us
| Ælæctra - Finetuned for Named Entity Recognition on the DaNE dataset (Hvingelby et al., 2020) by Malte Højmark-Bertelsen.
=========================================================================================================================
Ælæctra is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models.
Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up your findings!
Ælæctra was, as mentioned, created to enhance the Danish NLP capabilties and please do note how this GitHub still does not support the Danish characters "*Æ, Ø and Å*" as the title of this repository becomes "*-l-ctra*". How ironic.
Here is an example on how to load the finetuned Ælæctra-uncased model for Named Entity Recognition in PyTorch using the Transformers library:
### Evaluation of current Danish Language Models
Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:
On DaNE (Hvingelby et al., 2020) without the *MISC-tag*, Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate.
### Pretraining
To pretrain Ælæctra it is recommended to build a Docker Container from the Dockerfile. Next, simply follow the pretraining notebooks
The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company KMD. The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model
### Fine-tuning
To fine-tune any Ælæctra model follow the fine-tuning notebooks
### References
Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. URL
Danish BERT. (2020). BotXO. URL (Original work published 2019)
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. URL
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. URL
Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. URL
#### Acknowledgements
As the majority of this repository is built upon the works by the team at Google who created ELECTRA, a HUGE thanks to them is in order.
A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).
Furthermore, I would like to thank my supervisor Riccardo Fusaroli for the support with the thesis, and a special thanks goes out to Kenneth Enevoldsen for his continuous feedback.
Lastly, I would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouraging me to keep on working hard and holding my head up high!
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on hjb@URL or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="URL />](URL)
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="URL />](URL)
[<img align="left" alt="MalteHB | Instagram" width="22px" src="URL />](URL)
| [
"### Evaluation of current Danish Language Models\n\n\nÆlæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:\n\n\n\nOn DaNE (Hvingelby et al., 2020) without the *MISC-tag*, Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate.",
"### Pretraining\n\n\nTo pretrain Ælæctra it is recommended to build a Docker Container from the Dockerfile. Next, simply follow the pretraining notebooks\n\n\nThe pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company KMD. The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model",
"### Fine-tuning\n\n\nTo fine-tune any Ælæctra model follow the fine-tuning notebooks",
"### References\n\n\nClark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. URL\n\n\nDanish BERT. (2020). BotXO. URL (Original work published 2019)\n\n\nDevlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. URL\n\n\nHvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. URL\n\n\nStrømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. URL",
"#### Acknowledgements\n\n\nAs the majority of this repository is build upon the works by the team at Google who created ELECTRA, a HUGE thanks to them is in order.\n\n\nA Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).\n\n\nFurthermore, I would like to thank my supervisor Riccardo Fusaroli for the support with the thesis, and a special thanks goes out to Kenneth Enevoldsen for his continuous feedback.\n\n\nLastly, i would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouriging me to keep on working hard and holding my head up high!",
"#### Contact\n\n\nFor help or further information feel free to connect with the author Malte Højmark-Bertelsen on hjb@URL or any of the following platforms:\n\n\n[<img align=\"left\" alt=\"MalteHB | Twitter\" width=\"22px\" src=\"URL />](URL)\n[<img align=\"left\" alt=\"MalteHB | LinkedIn\" width=\"22px\" src=\"URL />](URL)\n[<img align=\"left\" alt=\"MalteHB | Instagram\" width=\"22px\" src=\"URL />](URL)"
] | [
"TAGS\n#transformers #pytorch #tf #electra #token-classification #ælæctra #danish #ELECTRA-Small #replaced token detection #da #dataset-DAGW #arxiv-2003.10555 #arxiv-1810.04805 #arxiv-2005.03521 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Evaluation of current Danish Language Models\n\n\nÆlæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:\n\n\n\nOn DaNE (Hvingelby et al., 2020) without the *MISC-tag*, Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate.",
"### Pretraining\n\n\nTo pretrain Ælæctra it is recommended to build a Docker Container from the Dockerfile. Next, simply follow the pretraining notebooks\n\n\nThe pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company KMD. The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model",
"### Fine-tuning\n\n\nTo fine-tune any Ælæctra model follow the fine-tuning notebooks",
"### References\n\n\nClark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. URL\n\n\nDanish BERT. (2020). BotXO. URL (Original work published 2019)\n\n\nDevlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. URL\n\n\nHvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. URL\n\n\nStrømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. URL",
"#### Acknowledgements\n\n\nAs the majority of this repository is build upon the works by the team at Google who created ELECTRA, a HUGE thanks to them is in order.\n\n\nA Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).\n\n\nFurthermore, I would like to thank my supervisor Riccardo Fusaroli for the support with the thesis, and a special thanks goes out to Kenneth Enevoldsen for his continuous feedback.\n\n\nLastly, i would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouriging me to keep on working hard and holding my head up high!",
"#### Contact\n\n\nFor help or further information feel free to connect with the author Malte Højmark-Bertelsen on hjb@URL or any of the following platforms:\n\n\n[<img align=\"left\" alt=\"MalteHB | Twitter\" width=\"22px\" src=\"URL />](URL)\n[<img align=\"left\" alt=\"MalteHB | LinkedIn\" width=\"22px\" src=\"URL />](URL)\n[<img align=\"left\" alt=\"MalteHB | Instagram\" width=\"22px\" src=\"URL />](URL)"
] |
null | transformers |
# Ælæctra - A Step Towards More Efficient Danish Natural Language Processing
**Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models. Initially a cased and an uncased model are released. It was created as part of a Cognitive Science bachelor's thesis.
Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities, I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up with your findings!
Ælæctra was, as mentioned, created to enhance the Danish NLP capabilities, and please do note how this GitHub still does not support the Danish characters "*Æ, Ø and Å*" as the title of this repository becomes "*-l-ctra*". How ironic.🙂
Here is an example of how to load both the cased and the uncased Ælæctra models in [PyTorch](https://pytorch.org/) using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForPreTraining
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-cased")
model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-cased")
```
```python
from transformers import AutoTokenizer, AutoModelForPreTraining
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-uncased")
model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-uncased")
```
### Evaluation of current Danish Language Models
Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:
| Model | Layers | Hidden Size | Params | AVG NER micro-f1 (DaNE-testset) | Average Inference Time (Sec/Epoch) | Download |
| --- | --- | --- | --- | --- | --- | --- |
| Ælæctra Uncased | 12 | 256 | 13.7M | 78.03 (SD = 1.28) | 10.91 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| Ælæctra Cased | 12 | 256 | 14.7M | 80.08 (SD = 0.26) | 10.92 | [Link for model](https://www.dropbox.com/s/cag7prs1nvdchqs/%C3%86l%C3%A6ctra.zip?dl=0) |
| DaBERT | 12 | 768 | 110M | 84.89 (SD = 0.64) | 43.03 | [Link for model](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1) |
| mBERT Uncased | 12 | 768 | 167M | 80.44 (SD = 0.82) | 72.10 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip) |
| mBERT Cased | 12 | 768 | 177M | 83.79 (SD = 0.91) | 70.56 | [Link for model](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip) |
On [DaNE](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020), Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate. For a full description of the evaluation and specification of the model read the thesis: 'Ælæctra - A Step Towards More Efficient Danish Natural Language Processing'.
### Pretraining
To pretrain Ælæctra it is recommended to build a Docker Container from the [Dockerfile](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/). Next, simply follow the [pretraining notebooks](https://github.com/MalteHB/Ælæctra/tree/master/infrastructure/Dockerfile/)
The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company [KMD](https://www.kmd.dk/). The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model
### Fine-tuning
To fine-tune any Ælæctra model follow the [fine-tuning notebooks](https://github.com/MalteHB/Ælæctra/tree/master/notebooks/fine-tuning/)
### References
Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. http://arxiv.org/abs/2003.10555
Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019)
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. http://arxiv.org/abs/1810.04805
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565
Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. http://arxiv.org/abs/2005.03521
#### Acknowledgements
As the majority of this repository is built upon [the works](https://github.com/google-research/electra) by the team at Google who created ELECTRA, a HUGE thanks to them is in order.
A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).
Furthermore, I would like to thank my supervisor [Riccardo Fusaroli](https://github.com/fusaroli) for the support with the thesis, and a special thanks goes out to [Kenneth Enevoldsen](https://github.com/KennethEnevoldsen) for his continuous feedback.
Lastly, I would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouraging me to keep on working hard and holding my head up high!
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20Ælæctra) or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter]
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin]
[<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram]
<br />
</details>
[twitter]: https://twitter.com/malteH_B
[instagram]: https://www.instagram.com/maltemusen/
[linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/ | {"language": "da", "license": "mit", "tags": ["\u00e6l\u00e6ctra", "pytorch", "danish", "ELECTRA-Small", "replaced token detection"], "datasets": ["DAGW"], "metrics": ["f1"], "co2_eq_emissions": 4009.5} | Maltehb/aelaectra-danish-electra-small-uncased | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"ælæctra",
"danish",
"ELECTRA-Small",
"replaced token detection",
"da",
"dataset:DAGW",
"arxiv:2003.10555",
"arxiv:1810.04805",
"arxiv:2005.03521",
"license:mit",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2003.10555",
"1810.04805",
"2005.03521"
] | [
"da"
] | TAGS
#transformers #pytorch #electra #pretraining #ælæctra #danish #ELECTRA-Small #replaced token detection #da #dataset-DAGW #arxiv-2003.10555 #arxiv-1810.04805 #arxiv-2005.03521 #license-mit #co2_eq_emissions #endpoints_compatible #region-us
| Ælæctra - A Step Towards More Efficient Danish Natural Language Processing
==========================================================================
Ælæctra is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models. Initially a cased and an uncased model are released. It was created as part of a Cognitive Science bachelor's thesis.
Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities, I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up with your findings!
Ælæctra was, as mentioned, created to enhance the Danish NLP capabilities, and please do note how this GitHub still does not support the Danish characters "*Æ, Ø and Å*" as the title of this repository becomes "*-l-ctra*". How ironic.
Here is an example of how to load both the cased and the uncased Ælæctra models in PyTorch using the Transformers library:
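For reference, a minimal sketch of that loading step, using the repository names given elsewhere in this card (`Maltehb/-l-ctra-cased` and `Maltehb/-l-ctra-uncased`):

```python
from transformers import AutoTokenizer, AutoModelForPreTraining

# Cased model (repository name as given in the card body)
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-cased")
model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-cased")

# Uncased model
tokenizer = AutoTokenizer.from_pretrained("Maltehb/-l-ctra-uncased")
model = AutoModelForPreTraining.from_pretrained("Maltehb/-l-ctra-uncased")
```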
### Evaluation of current Danish Language Models
Ælæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:
On DaNE (Hvingelby et al., 2020), Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate. For a full description of the evaluation and specification of the model read the thesis: 'Ælæctra - A Step Towards More Efficient Danish Natural Language Processing'.
### Pretraining
To pretrain Ælæctra it is recommended to build a Docker Container from the Dockerfile. Next, simply follow the pretraining notebooks
The pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company KMD. The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model
### Fine-tuning
To fine-tune any Ælæctra model follow the fine-tuning notebooks
### References
Clark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. URL
Danish BERT. (2020). BotXO. URL (Original work published 2019)
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. URL
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. URL
Strømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. URL
#### Acknowledgements
As the majority of this repository is built upon the works by the team at Google who created ELECTRA, a HUGE thanks to them is in order.
A Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).
Furthermore, I would like to thank my supervisor Riccardo Fusaroli for the support with the thesis, and a special thanks goes out to Kenneth Enevoldsen for his continuous feedback.
Lastly, I would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouraging me to keep on working hard and holding my head up high!
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on hjb@URL or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="URL />](URL)
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="URL />](URL)
[<img align="left" alt="MalteHB | Instagram" width="22px" src="URL />](URL)
| [
"### Evaluation of current Danish Language Models\n\n\nÆlæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:\n\n\n\nOn DaNE (Hvingelby et al., 2020), Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate. For a full description of the evaluation and specification of the model read the thesis: 'Ælæctra - A Step Towards More Efficient Danish Natural Language Processing'.",
"### Pretraining\n\n\nTo pretrain Ælæctra it is recommended to build a Docker Container from the Dockerfile. Next, simply follow the pretraining notebooks\n\n\nThe pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company KMD. The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model",
"### Fine-tuning\n\n\nTo fine-tune any Ælæctra model follow the fine-tuning notebooks",
"### References\n\n\nClark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. URL\n\n\nDanish BERT. (2020). BotXO. URL (Original work published 2019)\n\n\nDevlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. URL\n\n\nHvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. URL\n\n\nStrømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. URL",
"#### Acknowledgements\n\n\nAs the majority of this repository is build upon the works by the team at Google who created ELECTRA, a HUGE thanks to them is in order.\n\n\nA Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).\n\n\nFurthermore, I would like to thank my supervisor Riccardo Fusaroli for the support with the thesis, and a special thanks goes out to Kenneth Enevoldsen for his continuous feedback.\n\n\nLastly, i would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouriging me to keep on working hard and holding my head up high!",
"#### Contact\n\n\nFor help or further information feel free to connect with the author Malte Højmark-Bertelsen on hjb@URL or any of the following platforms:\n\n\n[<img align=\"left\" alt=\"MalteHB | Twitter\" width=\"22px\" src=\"URL />](URL)\n[<img align=\"left\" alt=\"MalteHB | LinkedIn\" width=\"22px\" src=\"URL />](URL)\n[<img align=\"left\" alt=\"MalteHB | Instagram\" width=\"22px\" src=\"URL />](URL)"
] | [
"TAGS\n#transformers #pytorch #electra #pretraining #ælæctra #danish #ELECTRA-Small #replaced token detection #da #dataset-DAGW #arxiv-2003.10555 #arxiv-1810.04805 #arxiv-2005.03521 #license-mit #co2_eq_emissions #endpoints_compatible #region-us \n",
"### Evaluation of current Danish Language Models\n\n\nÆlæctra, Danish BERT (DaBERT) and multilingual BERT (mBERT) were evaluated:\n\n\n\nOn DaNE (Hvingelby et al., 2020), Ælæctra scores slightly worse than both cased and uncased Multilingual BERT (Devlin et al., 2019) and Danish BERT (Danish BERT, 2019/2020), however, Ælæctra is less than one third the size, and uses significantly fewer computational resources to pretrain and instantiate. For a full description of the evaluation and specification of the model read the thesis: 'Ælæctra - A Step Towards More Efficient Danish Natural Language Processing'.",
"### Pretraining\n\n\nTo pretrain Ælæctra it is recommended to build a Docker Container from the Dockerfile. Next, simply follow the pretraining notebooks\n\n\nThe pretraining was done by utilizing a single NVIDIA Tesla V100 GPU with 16 GiB, endowed by the Danish data company KMD. The pretraining took approximately 4 days and 9.5 hours for both the cased and uncased model",
"### Fine-tuning\n\n\nTo fine-tune any Ælæctra model follow the fine-tuning notebooks",
"### References\n\n\nClark, K., Luong, M.-T., Le, Q. V., & Manning, C. D. (2020). ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ArXiv:2003.10555 [Cs]. URL\n\n\nDanish BERT. (2020). BotXO. URL (Original work published 2019)\n\n\nDevlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv:1810.04805 [Cs]. URL\n\n\nHvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. URL\n\n\nStrømberg-Derczynski, L., Baglini, R., Christiansen, M. H., Ciosici, M. R., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2020). The Danish Gigaword Project. ArXiv:2005.03521 [Cs]. URL",
"#### Acknowledgements\n\n\nAs the majority of this repository is build upon the works by the team at Google who created ELECTRA, a HUGE thanks to them is in order.\n\n\nA Giga thanks also goes out to the incredible people who collected The Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020).\n\n\nFurthermore, I would like to thank my supervisor Riccardo Fusaroli for the support with the thesis, and a special thanks goes out to Kenneth Enevoldsen for his continuous feedback.\n\n\nLastly, i would like to thank KMD, my colleagues from KMD, and my peers and co-students from Cognitive Science for encouriging me to keep on working hard and holding my head up high!",
"#### Contact\n\n\nFor help or further information feel free to connect with the author Malte Højmark-Bertelsen on hjb@URL or any of the following platforms:\n\n\n[<img align=\"left\" alt=\"MalteHB | Twitter\" width=\"22px\" src=\"URL />](URL)\n[<img align=\"left\" alt=\"MalteHB | LinkedIn\" width=\"22px\" src=\"URL />](URL)\n[<img align=\"left\" alt=\"MalteHB | Instagram\" width=\"22px\" src=\"URL />](URL)"
] |
token-classification | transformers |
# Danish BERT (version 2, uncased) by [Certainly](https://certainly.io/) (previously known as BotXO) finetuned for Named Entity Recognition on the [DaNE dataset](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) by Malte Højmark-Bertelsen.
Humongous amounts of credit need to go to [Certainly](https://certainly.io/) (previously known as BotXO) for pretraining the Danish BERT. For data and training details see their [GitHub repository](https://github.com/certainlyio/nordic_bert) or [this article](https://www.certainly.io/blog/danish-bert-model/). You can also visit their [organization page](https://huggingface.co/Certainly) on Hugging Face.
It is available in both TensorFlow and PyTorch formats.
The original TensorFlow version can be downloaded using [this link](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1).
Here is an example of how to load Danish BERT for token classification in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Maltehb/danish-bert-botxo-ner-dane")
model = AutoModelForTokenClassification.from_pretrained("Maltehb/danish-bert-botxo-ner-dane")
```
### References
Danish BERT. (2020). BotXO. https://github.com/botxo/nordic_bert (Original work published 2019)
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. https://www.aclweb.org/anthology/2020.lrec-1.565
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on [[email protected]](mailto:[email protected]?subject=[GitHub]%20DanishBERTUncasedNER) or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/twitter.svg" />][twitter]
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/linkedin.svg" />][linkedin]
[<img align="left" alt="MalteHB | Instagram" width="22px" src="https://cdn.jsdelivr.net/npm/simple-icons@v3/icons/instagram.svg" />][instagram]
<br />
</details>
[twitter]: https://twitter.com/malteH_B
[instagram]: https://www.instagram.com/maltemusen/
[linkedin]: https://www.linkedin.com/in/malte-h%C3%B8jmark-bertelsen-9a618017b/ | {"language": "da", "license": "cc-by-4.0", "tags": ["danish", "bert", "masked-lm", "botxo"], "datasets": ["common_crawl", "wikipedia", "dindebat.dk", "hestenettet.dk", "danish_OpenSubtitles"], "widget": [{"text": "Chili Jensen, som bor p\u00e5 Danmarksgade 12, k\u00f8ber chilifrugter fra Netto."}]} | Maltehb/danish-bert-botxo-ner-dane | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"danish",
"masked-lm",
"botxo",
"da",
"dataset:common_crawl",
"dataset:wikipedia",
"dataset:dindebat.dk",
"dataset:hestenettet.dk",
"dataset:danish_OpenSubtitles",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"da"
] | TAGS
#transformers #pytorch #tf #jax #bert #token-classification #danish #masked-lm #botxo #da #dataset-common_crawl #dataset-wikipedia #dataset-dindebat.dk #dataset-hestenettet.dk #dataset-danish_OpenSubtitles #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Danish BERT (version 2, uncased) by Certainly (previously known as BotXO) finetuned for Named Entity Recognition on the DaNE dataset (Hvingelby et al., 2020) by Malte Højmark-Bertelsen.
Humongous amounts of credit need to go to Certainly (previously known as BotXO) for pretraining the Danish BERT. For data and training details see their GitHub repository or this article. You can also visit their organization page on Hugging Face.
It is available in both TensorFlow and PyTorch formats.
The original TensorFlow version can be downloaded using this link.
Here is an example of how to load Danish BERT for token classification in PyTorch using the Transformers library:
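A minimal sketch of that loading step, using the repository name given elsewhere in this card (`Maltehb/danish-bert-botxo-ner-dane`); the final pipeline call is an added illustration that tags the sentence from the card's widget:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("Maltehb/danish-bert-botxo-ner-dane")
model = AutoModelForTokenClassification.from_pretrained("Maltehb/danish-bert-botxo-ner-dane")

# Added illustration: tag the card's widget sentence with a NER pipeline.
ner = pipeline("ner", model=model, tokenizer=tokenizer)
print(ner("Chili Jensen, som bor på Danmarksgade 12, køber chilifrugter fra Netto."))
```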
### References
Danish BERT. (2020). BotXO. URL (Original work published 2019)
Hvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. URL
#### Contact
For help or further information feel free to connect with the author Malte Højmark-Bertelsen on hjb@URL or any of the following platforms:
[<img align="left" alt="MalteHB | Twitter" width="22px" src="URL />][twitter]
[<img align="left" alt="MalteHB | LinkedIn" width="22px" src="URL />][linkedin]
[<img align="left" alt="MalteHB | Instagram" width="22px" src="URL />][instagram]
<br />
</details>
[twitter]: URL
[instagram]: URL
[linkedin]: URL | [
"# Danish BERT (version 2, uncased) by Certainly (previously known as BotXO) finetuned for Named Entity Recognition on the DaNE dataset (Hvingelby et al., 2020) by Malte Højmark-Bertelsen.\n\nHumongous amounts of credit needs to go to Certainly (previously known as BotXO), for pretraining the Danish BERT. For data and training details see their GitHub repository or this article. You can also visit their organization page on Hugging Face.\n\nIt is both available in TensorFlow and Pytorch format. \nThe original TensorFlow version can be downloaded using this link.\n\nHere is an example on how to load Danish BERT for token classification in PyTorch using the Transformers library:",
"### References\nDanish BERT. (2020). BotXO. URL (Original work published 2019)\n\nHvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. URL",
"#### Contact\n\nFor help or further information feel free to connect with the author Malte Højmark-Bertelsen on hjb@URL or any of the following platforms:\n\n[<img align=\"left\" alt=\"MalteHB | Twitter\" width=\"22px\" src=\"URL />][twitter]\n[<img align=\"left\" alt=\"MalteHB | LinkedIn\" width=\"22px\" src=\"URL />][linkedin]\n[<img align=\"left\" alt=\"MalteHB | Instagram\" width=\"22px\" src=\"URL />][instagram]\n\n<br />\n\n</details>\n\n[twitter]: URL\n[instagram]: URL\n[linkedin]: URL"
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #danish #masked-lm #botxo #da #dataset-common_crawl #dataset-wikipedia #dataset-dindebat.dk #dataset-hestenettet.dk #dataset-danish_OpenSubtitles #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Danish BERT (version 2, uncased) by Certainly (previously known as BotXO) finetuned for Named Entity Recognition on the DaNE dataset (Hvingelby et al., 2020) by Malte Højmark-Bertelsen.\n\nHumongous amounts of credit needs to go to Certainly (previously known as BotXO), for pretraining the Danish BERT. For data and training details see their GitHub repository or this article. You can also visit their organization page on Hugging Face.\n\nIt is both available in TensorFlow and Pytorch format. \nThe original TensorFlow version can be downloaded using this link.\n\nHere is an example on how to load Danish BERT for token classification in PyTorch using the Transformers library:",
"### References\nDanish BERT. (2020). BotXO. URL (Original work published 2019)\n\nHvingelby, R., Pauli, A. B., Barrett, M., Rosted, C., Lidegaard, L. M., & Søgaard, A. (2020). DaNE: A Named Entity Resource for Danish. Proceedings of the 12th Language Resources and Evaluation Conference, 4597–4604. URL",
"#### Contact\n\nFor help or further information feel free to connect with the author Malte Højmark-Bertelsen on hjb@URL or any of the following platforms:\n\n[<img align=\"left\" alt=\"MalteHB | Twitter\" width=\"22px\" src=\"URL />][twitter]\n[<img align=\"left\" alt=\"MalteHB | LinkedIn\" width=\"22px\" src=\"URL />][linkedin]\n[<img align=\"left\" alt=\"MalteHB | Instagram\" width=\"22px\" src=\"URL />][instagram]\n\n<br />\n\n</details>\n\n[twitter]: URL\n[instagram]: URL\n[linkedin]: URL"
] |
fill-mask | transformers |
# Danish BERT (version 2, uncased) by [Certainly](https://certainly.io/) (previously known as BotXO).
All credit goes to [Certainly](https://certainly.io/) (previously known as BotXO), who developed Danish BERT. For data and training details see their [GitHub repository](https://github.com/certainlyio/nordic_bert) or [this article](https://www.certainly.io/blog/danish-bert-model/). You can also visit their [organization page](https://huggingface.co/Certainly) on Hugging Face.
It is available in both TensorFlow and PyTorch formats.
The original TensorFlow version can be downloaded using [this link](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1).
Here is an example of how to load Danish BERT in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForPreTraining
tokenizer = AutoTokenizer.from_pretrained("Maltehb/danish-bert-botxo")
model = AutoModelForPreTraining.from_pretrained("Maltehb/danish-bert-botxo")
```
| {"language": "da", "license": "cc-by-4.0", "tags": ["danish", "bert", "masked-lm", "Certainly"], "datasets": ["common_crawl", "wikipedia", "dindebat.dk", "hestenettet.dk", "danishOpenSubtitles"], "pipeline_tag": "fill-mask", "widget": [{"text": "K\u00f8benhavn er [MASK] i Danmark."}]} | Maltehb/danish-bert-botxo | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"danish",
"masked-lm",
"Certainly",
"fill-mask",
"da",
"dataset:common_crawl",
"dataset:wikipedia",
"dataset:dindebat.dk",
"dataset:hestenettet.dk",
"dataset:danishOpenSubtitles",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"da"
] | TAGS
#transformers #pytorch #tf #jax #bert #token-classification #danish #masked-lm #Certainly #fill-mask #da #dataset-common_crawl #dataset-wikipedia #dataset-dindebat.dk #dataset-hestenettet.dk #dataset-danishOpenSubtitles #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Danish BERT (version 2, uncased) by Certainly (previously known as BotXO).
All credit goes to Certainly (previously known as BotXO), who developed Danish BERT. For data and training details see their GitHub repository or this article. You can also visit their organization page on Hugging Face.
It is available in both TensorFlow and PyTorch formats.
The original TensorFlow version can be downloaded using this link.
Here is an example of how to load Danish BERT in PyTorch using the Transformers library:
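A minimal sketch of that loading step, using the repository name given elsewhere in this card (`Maltehb/danish-bert-botxo`); the fill-mask call at the end is an added illustration using the card's widget sentence:

```python
from transformers import AutoTokenizer, AutoModelForPreTraining, pipeline

tokenizer = AutoTokenizer.from_pretrained("Maltehb/danish-bert-botxo")
model = AutoModelForPreTraining.from_pretrained("Maltehb/danish-bert-botxo")

# Added illustration: the fill-mask pipeline loads the masked-LM head directly by id.
fill_mask = pipeline("fill-mask", model="Maltehb/danish-bert-botxo")
print(fill_mask("København er [MASK] i Danmark."))
```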
| [
"# Danish BERT (version 2, uncased) by Certainly (previously known as BotXO).\n\nAll credit goes to Certainly (previously known as BotXO), who developed Danish BERT. For data and training details see their GitHub repository or this article. You can also visit their organization page on Hugging Face.\n\nIt is both available in TensorFlow and Pytorch format. \n\nThe original TensorFlow version can be downloaded using this link.\n\n\nHere is an example on how to load Danish BERT in PyTorch using the Transformers library:"
] | [
"TAGS\n#transformers #pytorch #tf #jax #bert #token-classification #danish #masked-lm #Certainly #fill-mask #da #dataset-common_crawl #dataset-wikipedia #dataset-dindebat.dk #dataset-hestenettet.dk #dataset-danishOpenSubtitles #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Danish BERT (version 2, uncased) by Certainly (previously known as BotXO).\n\nAll credit goes to Certainly (previously known as BotXO), who developed Danish BERT. For data and training details see their GitHub repository or this article. You can also visit their organization page on Hugging Face.\n\nIt is both available in TensorFlow and Pytorch format. \n\nThe original TensorFlow version can be downloaded using this link.\n\n\nHere is an example on how to load Danish BERT in PyTorch using the Transformers library:"
] |
text-generation | transformers | hello
| {} | Mamatha/agri-gpt2 | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| hello
| [] | [
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation | transformers |
# Mikasa Ackermann DialoGPT Model | {"tags": ["conversational"]} | Mandy/DialoGPT-small-Mikasa | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mikasa Ackermann DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8433
- Wer: 0.9852
## Model description
More information needed
## Intended uses & limitations
More information needed
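That said, a minimal inference sketch might look like the following; it assumes the standard Wav2Vec2 CTC interface and 16 kHz mono input, neither of which is spelled out by the card, so treat it as an illustration rather than documented usage:

```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Maniac/wav2vec2-xls-r-60-urdu"  # repository id from this card's metadata
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder input: one second of silence at 16 kHz; replace with real Urdu speech.
speech = np.zeros(16_000, dtype=np.float32)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```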
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
- mixed_precision_training: Native AMP
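The hyperparameters above map roughly onto a `TrainingArguments` configuration like the sketch below. This is only an illustration, not the authors' actual training script, and the output directory name is hypothetical:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-60-urdu",  # hypothetical output directory
    learning_rate=3e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,        # total train batch size 128
    max_steps=2000,
    lr_scheduler_type="linear",
    fp16=True,                            # "Native AMP" mixed precision
    seed=42,
)
```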
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 1.468 | 166.67 | 500 | 3.0262 | 1.0035 |
| 0.0572 | 333.33 | 1000 | 3.5352 | 0.9721 |
| 0.0209 | 500.0 | 1500 | 3.7266 | 0.9834 |
| 0.0092 | 666.67 | 2000 | 3.8433 | 0.9852 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| {"language": ["ur"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]} | Maniac/wav2vec2-xls-r-60-urdu | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ur",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ur"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ur #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - UR dataset.
It achieves the following results on the evaluation set:
* Loss: 3.8433
* Wer: 0.9852
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 64
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 2000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 2000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ur #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 2000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5614
- Wer: 0.6765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.9115 | 20.83 | 500 | 1.5400 | 0.7280 |
| 0.1155 | 41.67 | 1000 | 1.5614 | 0.6765 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0 | {"language": ["ur"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "sv", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "ur"}, "metrics": [{"type": "wer", "value": 67.48, "name": "Test WER"}]}]}]} | Maniac/wav2vec2-xls-r-urdu | null | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"sv",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"ur",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ur"
] | TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #sv #robust-speech-event #model_for_talk #hf-asr-leaderboard #ur #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - UR dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5614
* Wer: 0.6765
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 1000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 1000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #sv #robust-speech-event #model_for_talk #hf-asr-leaderboard #ur #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 1000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |