| Column | Dtype | Lengths / values |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | null |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | null |
| created_at | stringlengths | 25 to 25 |
| arxiv | sequencelengths | 0 to 201 |
| languages | sequencelengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | sequencelengths | 0 to 722 |
| processed_texts | sequencelengths | 1 to 723 |
text-classification
transformers
Language Detection Model for Nepali, English, Hindi and Spanish, fine-tuned from xlm-roberta-large.
{}
Manishl7/xlm-roberta-large-language-detection
null
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us
Language Detection Model for Nepali, English, Hindi and Spanish, fine-tuned from xlm-roberta-large.
[]
[ "TAGS\n#transformers #pytorch #roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Manthan/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
null
null
import streamlit as st
from PIL import Image

# The card's snippet starts mid-function: `return im` below is the tail of an
# image-enhancement function whose body is not included in the card.
    return im

def main():
    st.title("Lowlight Enhancement")
    st.write("This is a simple lowlight enhancement app with great performance and does not require paired images to train.")
    st.write("The model runs at 1000/11 FPS on single GPU/CPU on images with a size of 1200*900*3")
    uploaded_file = st.file_uploader("Lowlight Image")
    if uploaded_file:
        data_lowlight = Image.open(uploaded_file)
        col1, col2 = st.columns(2)
        col1.write("Original (Lowlight)")
        col1.image(data_lowlight, caption="Lowlight Image", use_column_width=True)
{}
Manyman3231/lowlight-enhancement
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
return im def main(): URL("Lowlight Enhancement") URL("This is a simple lowlight enhancement app with great performance and does not require paired images to train.") URL("The model runs at 1000/11 FPS on single GPU/CPU on images with a size of 1200*900*3") uploaded_file = st.file_uploader("Lowlight Image") if uploaded_file: data_lowlight = URL(uploaded_file) col1, col2 = st.columns(2) URL("Original (Lowlight)") URL(data_lowlight, caption="Lowlight Image", use_column_width=True)
[]
[ "TAGS\n#region-us \n" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.4844 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6936 | 0.54 | 500 | 1.4844 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "datasets": ["samsum"], "model-index": [{"name": "pegasus-samsum", "results": []}]}
Mapcar/pegasus-samsum
null
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:samsum", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #pegasus #text2text-generation #generated_from_trainer #dataset-samsum #autotrain_compatible #endpoints_compatible #has_space #region-us
pegasus-samsum ============== This model is a fine-tuned version of google/pegasus-cnn\_dailymail on the samsum dataset. It achieves the following results on the evaluation set: * Loss: 1.4844 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #pegasus #text2text-generation #generated_from_trainer #dataset-samsum #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Mara/DialoGPT-medium-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
text2text-generation
transformers
# Pegasus XSUM Gigaword ## Model description Pegasus XSUM model fine-tuned on the Gigaword summarization task. It performs significantly better than pegasus-gigaword, but still does not match the performance reported in the model paper. ## Intended uses & limitations Produces short summaries with the coherence of the XSUM model. #### How to use ```python # You can include sample code which will be formatted ``` #### Limitations and bias Still has all the biases of any of the abstractive models, but seems a little less prone to hallucination. ## Training data Initialized with pegasus-XSUM ## Training procedure Trained for 11500 iterations on the Gigaword corpus using OOB seq2seq (from Hugging Face, using the default parameters) ## Eval results Evaluated on the Gigaword test set (from Hugging Face, using the default parameters): run_summarization.py --model_name_or_path pegasus-xsum/checkpoint-11500/ --do_predict --dataset_name gigaword --dataset_config "3.0.0" --source_prefix "summarize: " --output_dir pegasus-xsum --per_device_train_batch_size=8 --per_device_eval_batch_size=8 --overwrite_output_dir --predict_with_generate | Metric | Score | | ----------- | ----------- | | eval_rouge1 | 34.1958 | | eval_rouge2 | 15.4033 | | eval_rougeL | 31.4488 | run_summarization.py --model_name_or_path google/pegasus-gigaword --do_predict --dataset_name gigaword --dataset_config "3.0.0" --source_prefix "summarize: " --output_dir pegasus-xsum --per_device_train_batch_size=8 --per_device_eval_batch_size=8 --overwrite_output_dir --predict_with_generate | Metric | Score | | ----------- | ----------- | | eval_rouge1 | 20.8111 | | eval_rouge2 | 8.766 | | eval_rougeL | 18.4431 | ### BibTeX entry and citation info ```bibtex @inproceedings{..., year={2020} } ```
{"language": ["English"], "tags": [], "datasets": ["XSUM", "Gigaword"], "metrics": ["Rouge"]}
Marc/pegasus_xsum_gigaword
null
[ "transformers", "pytorch", "pegasus", "text2text-generation", "dataset:XSUM", "dataset:Gigaword", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "English" ]
TAGS #transformers #pytorch #pegasus #text2text-generation #dataset-XSUM #dataset-Gigaword #autotrain_compatible #endpoints_compatible #region-us
Pegasus XSUM Gigaword ===================== Model description ----------------- Pegasus XSUM model finetuned to Gigaword Summarization task, significantly better performance than pegasus gigaword, but still doesn't match model paper performance. Intended uses & limitations --------------------------- Produces short summaries with the coherence of the XSUM Model #### How to use #### Limitations and bias Still has all the biases of any of the abstractive models, but seems a little less prone to hallucination. Training data ------------- Initialized with pegasus-XSUM Training procedure ------------------ Trained for 11500 iterations on Gigaword corpus using OOB seq2seq (from hugging face using the default parameters) Eval results ------------ Evaluated on Gigaword test set (from hugging face using the default parameters) run\_summarization.py --model\_name\_or\_path pegasus-xsum/checkpoint-11500/ --do\_predict --dataset\_name gigaword --dataset\_config "3.0.0" --source\_prefix "summarize: " --output\_dir pegasus-xsum --per\_device\_train\_batch\_size=8 --per\_device\_eval\_batch\_size=8 --overwrite\_output\_dir --predict\_with\_generate run\_summarization.py --model\_name\_or\_path google/pegasus-gigaword --do\_predict --dataset\_name gigaword --dataset\_config "3.0.0" --source\_prefix "summarize: " --output\_dir pegasus-xsum --per\_device\_train\_batch\_size=8 --per\_device\_eval\_batch\_size=8 --overwrite\_output\_dir --predict\_with\_generate ### BibTeX entry and citation info
[ "#### How to use", "#### Limitations and bias\n\n\nStill has all the biases of any of the abstractive models, but seems a little less prone to hallucination.\n\n\nTraining data\n-------------\n\n\nInitialized with pegasus-XSUM\n\n\nTraining procedure\n------------------\n\n\nTrained for 11500 iterations on Gigaword corpus using OOB seq2seq (from hugging face using the default parameters)\n\n\nEval results\n------------\n\n\nEvaluated on Gigaword test set (from hugging face using the default parameters)\nrun\\_summarization.py --model\\_name\\_or\\_path pegasus-xsum/checkpoint-11500/ --do\\_predict --dataset\\_name gigaword --dataset\\_config \"3.0.0\" --source\\_prefix \"summarize: \" --output\\_dir pegasus-xsum --per\\_device\\_train\\_batch\\_size=8 --per\\_device\\_eval\\_batch\\_size=8 --overwrite\\_output\\_dir --predict\\_with\\_generate\n\n\n\nrun\\_summarization.py --model\\_name\\_or\\_path google/pegasus-gigaword --do\\_predict --dataset\\_name gigaword --dataset\\_config \"3.0.0\" --source\\_prefix \"summarize: \" --output\\_dir pegasus-xsum --per\\_device\\_train\\_batch\\_size=8 --per\\_device\\_eval\\_batch\\_size=8 --overwrite\\_output\\_dir --predict\\_with\\_generate", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #pegasus #text2text-generation #dataset-XSUM #dataset-Gigaword #autotrain_compatible #endpoints_compatible #region-us \n", "#### How to use", "#### Limitations and bias\n\n\nStill has all the biases of any of the abstractive models, but seems a little less prone to hallucination.\n\n\nTraining data\n-------------\n\n\nInitialized with pegasus-XSUM\n\n\nTraining procedure\n------------------\n\n\nTrained for 11500 iterations on Gigaword corpus using OOB seq2seq (from hugging face using the default parameters)\n\n\nEval results\n------------\n\n\nEvaluated on Gigaword test set (from hugging face using the default parameters)\nrun\\_summarization.py --model\\_name\\_or\\_path pegasus-xsum/checkpoint-11500/ --do\\_predict --dataset\\_name gigaword --dataset\\_config \"3.0.0\" --source\\_prefix \"summarize: \" --output\\_dir pegasus-xsum --per\\_device\\_train\\_batch\\_size=8 --per\\_device\\_eval\\_batch\\_size=8 --overwrite\\_output\\_dir --predict\\_with\\_generate\n\n\n\nrun\\_summarization.py --model\\_name\\_or\\_path google/pegasus-gigaword --do\\_predict --dataset\\_name gigaword --dataset\\_config \"3.0.0\" --source\\_prefix \"summarize: \" --output\\_dir pegasus-xsum --per\\_device\\_train\\_batch\\_size=8 --per\\_device\\_eval\\_batch\\_size=8 --overwrite\\_output\\_dir --predict\\_with\\_generate", "### BibTeX entry and citation info" ]
question-answering
transformers
# ixambert-base-cased finetuned for QA This is a basic implementation of the multilingual model ["ixambert-base-cased"](https://huggingface.co/ixa-ehu/ixambert-base-cased), fine-tuned on SQuAD v1.1 and an experimental version of SQuAD1.1 in Basque (1/3 size of original SQuAD1.1), that is able to answer basic factual questions in English, Spanish and Basque. ## Overview * **Language model:** ixambert-base-cased * **Languages:** English, Spanish and Basque * **Downstream task:** Extractive QA * **Training data:** SQuAD v1.1 + experimental SQuAD1.1 in Basque * **Eval data:** SQuAD v1.1 + experimental SQuAD1.1 in Basque * **Infrastructure:** 1x GeForce RTX 2080 ## Outputs The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability for that span of text to be the correct answer. For example: ```python {'score': 0.9667195081710815, 'start': 101, 'end': 105, 'answer': '1820'} ``` ## How to use ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "MarcBrun/ixambert-finetuned-squad-eu-en" # To get predictions context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820" question = "When was Florence Nightingale born?" qa = pipeline("question-answering", model=model_name, tokenizer=model_name) pred = qa(question=question,context=context) # To load the model and tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Hyperparameters ``` batch_size = 8 n_epochs = 3 learning_rate = 2e-5 optimizer = AdamW lr_schedule = linear max_seq_len = 384 doc_stride = 128 ```
{"language": ["en", "es", "eu"], "datasets": ["squad"], "widget": [{"text": "When was Florence Nightingale born?", "context": "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820.", "example_title": "English"}, {"text": "\u00bfPor qu\u00e9 provincias pasa el Tajo?", "context": "El Tajo es el r\u00edo m\u00e1s largo de la pen\u00ednsula ib\u00e9rica, a la que atraviesa en su parte central, siguiendo un rumbo este-oeste, con una leve inclinaci\u00f3n hacia el suroeste, que se acent\u00faa cuando llega a Portugal, donde recibe el nombre de Tejo.\nNace en los montes Universales, en la sierra de Albarrac\u00edn, sobre la rama occidental del sistema Ib\u00e9rico y, despu\u00e9s de recorrer 1007 km, llega al oc\u00e9ano Atl\u00e1ntico en la ciudad de Lisboa. En su desembocadura forma el estuario del mar de la Paja, en el que vierte un caudal medio de 456 m\u00b3/s. En sus primeros 816 km atraviesa Espa\u00f1a, donde discurre por cuatro comunidades aut\u00f3nomas (Arag\u00f3n, Castilla-La Mancha, Madrid y Extremadura) y un total de seis provincias (Teruel, Guadalajara, Cuenca, Madrid, Toledo y C\u00e1ceres).", "example_title": "Espa\u00f1ol"}, {"text": "Zer beste izenak ditu Tartalo?", "context": "Tartalo euskal mitologiako izaki begibakar artzain erraldoia da. Tartalo izena zenbait euskal hizkeratan herskari-bustidurarekin ahoskatu ohi denez, horrelaxe ere idazten da batzuetan: Ttarttalo. Euskal Herriko zenbait tokitan, Torto edo Anxo ere esaten diote.", "example_title": "Euskara"}]}
MarcBrun/ixambert-finetuned-squad-eu-en
null
[ "transformers", "pytorch", "bert", "question-answering", "en", "es", "eu", "dataset:squad", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en", "es", "eu" ]
TAGS #transformers #pytorch #bert #question-answering #en #es #eu #dataset-squad #endpoints_compatible #has_space #region-us
# ixambert-base-cased finetuned for QA This is a basic implementation of the multilingual model "ixambert-base-cased", fine-tuned on SQuAD v1.1 and an experimental version of SQuAD1.1 in Basque (1/3 size of original SQuAD1.1), that is able to answer basic factual questions in English, Spanish and Basque. ## Overview * Language model: ixambert-base-cased * Languages: English, Spanish and Basque * Downstream task: Extractive QA * Training data: SQuAD v1.1 + experimental SQuAD1.1 in Basque * Eval data: SQuAD v1.1 + experimental SQuAD1.1 in Basque * Infrastructure: 1x GeForce RTX 2080 ## Outputs The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability for that span of text to be the correct answer. For example: ## How to use ## Hyperparameters
[ "# ixambert-base-cased finetuned for QA\n\nThis is a basic implementation of the multilingual model \"ixambert-base-cased\", fine-tuned on SQuAD v1.1 and an experimental version of SQuAD1.1 in Basque (1/3 size of original SQuAD1.1), that is able to answer basic factual questions in English, Spanish and Basque.", "## Overview\n\n* Language model: ixambert-base-cased\n* Languages: English, Spanish and Basque\n* Downstream task: Extractive QA\n* Training data: SQuAD v1.1 + experimental SQuAD1.1 in Basque\n* Eval data: SQuAD v1.1 + experimental SQuAD1.1 in Basque\n* Infrastructure: 1x GeForce RTX 2080", "## Outputs\n\nThe model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability for that span of text to be the correct answer. For example:", "## How to use", "## Hyperparameters" ]
[ "TAGS\n#transformers #pytorch #bert #question-answering #en #es #eu #dataset-squad #endpoints_compatible #has_space #region-us \n", "# ixambert-base-cased finetuned for QA\n\nThis is a basic implementation of the multilingual model \"ixambert-base-cased\", fine-tuned on SQuAD v1.1 and an experimental version of SQuAD1.1 in Basque (1/3 size of original SQuAD1.1), that is able to answer basic factual questions in English, Spanish and Basque.", "## Overview\n\n* Language model: ixambert-base-cased\n* Languages: English, Spanish and Basque\n* Downstream task: Extractive QA\n* Training data: SQuAD v1.1 + experimental SQuAD1.1 in Basque\n* Eval data: SQuAD v1.1 + experimental SQuAD1.1 in Basque\n* Infrastructure: 1x GeForce RTX 2080", "## Outputs\n\nThe model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability for that span of text to be the correct answer. For example:", "## How to use", "## Hyperparameters" ]
question-answering
transformers
# ixambert-base-cased finetuned for QA This is a basic implementation of the multilingual model ["ixambert-base-cased"](https://huggingface.co/ixa-ehu/ixambert-base-cased), fine-tuned on an experimental version of SQuAD1.1 in Basque (1/3 size of original SQuAD1.1), that is able to answer basic factual questions. ## Overview * **Language model:** ixambert-base-cased * **Languages:** English, Spanish and Basque * **Downstream task:** Extractive QA * **Training data:** Experimental SQuAD1.1 in Basque * **Eval data:** Experimental SQuAD1.1 in Basque * **Infrastructure:** 1x GeForce RTX 2080 ## Outputs The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability for that span of text to be the correct answer. For example: ```python {'score': 0.9667195081710815, 'start': 101, 'end': 105, 'answer': '1820'} ``` ## How to use ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "MarcBrun/ixambert-finetuned-squad-eu" # To get predictions context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820" question = "When was Florence Nightingale born?" qa = pipeline("question-answering", model=model_name, tokenizer=model_name) pred = qa(question=question,context=context) # To load the model and tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Hyperparameters ``` batch_size = 8 n_epochs = 3 learning_rate = 2e-5 optimizer = AdamW lr_schedule = linear max_seq_len = 384 doc_stride = 128 ```
{"language": ["en", "es", "eu"], "widget": [{"text": "When was Florence Nightingale born?", "context": "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820.", "example_title": "English"}, {"text": "\u00bfPor qu\u00e9 provincias pasa el Tajo?", "context": "El Tajo es el r\u00edo m\u00e1s largo de la pen\u00ednsula ib\u00e9rica, a la que atraviesa en su parte central, siguiendo un rumbo este-oeste, con una leve inclinaci\u00f3n hacia el suroeste, que se acent\u00faa cuando llega a Portugal, donde recibe el nombre de Tejo.\nNace en los montes Universales, en la sierra de Albarrac\u00edn, sobre la rama occidental del sistema Ib\u00e9rico y, despu\u00e9s de recorrer 1007 km, llega al oc\u00e9ano Atl\u00e1ntico en la ciudad de Lisboa. En su desembocadura forma el estuario del mar de la Paja, en el que vierte un caudal medio de 456 m\u00b3/s. En sus primeros 816 km atraviesa Espa\u00f1a, donde discurre por cuatro comunidades aut\u00f3nomas (Arag\u00f3n, Castilla-La Mancha, Madrid y Extremadura) y un total de seis provincias (Teruel, Guadalajara, Cuenca, Madrid, Toledo y C\u00e1ceres).", "example_title": "Espa\u00f1ol"}, {"text": "Zer beste izenak ditu Tartalo?", "context": "Tartalo euskal mitologiako izaki begibakar artzain erraldoia da. Tartalo izena zenbait euskal hizkeratan herskari-bustidurarekin ahoskatu ohi denez, horrelaxe ere idazten da batzuetan: Ttarttalo. Euskal Herriko zenbait tokitan, Torto edo Anxo ere esaten diote.", "example_title": "Euskara"}]}
MarcBrun/ixambert-finetuned-squad-eu
null
[ "transformers", "pytorch", "bert", "question-answering", "en", "es", "eu", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en", "es", "eu" ]
TAGS #transformers #pytorch #bert #question-answering #en #es #eu #endpoints_compatible #has_space #region-us
# ixambert-base-cased finetuned for QA This is a basic implementation of the multilingual model "ixambert-base-cased", fine-tuned on an experimental version of SQuAD1.1 in Basque (1/3 size of original SQuAD1.1), that is able to answer basic factual questions. ## Overview * Language model: ixambert-base-cased * Languages: English, Spanish and Basque * Downstream task: Extractive QA * Training data: Experimental SQuAD1.1 in Basque * Eval data: Experimental SQuAD1.1 in Basque * Infrastructure: 1x GeForce RTX 2080 ## Outputs The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability for that span of text to be the correct answer. For example: ## How to use ## Hyperparameters
[ "# ixambert-base-cased finetuned for QA\n\nThis is a basic implementation of the multilingual model \"ixambert-base-cased\", fine-tuned on an experimental version of SQuAD1.1 in Basque (1/3 size of original SQuAD1.1), that is able to answer basic factual questions.", "## Overview\n\n* Language model: ixambert-base-cased\n* Languages: English, Spanish and Basque\n* Downstream task: Extractive QA\n* Training data: Experimental SQuAD1.1 in Basque\n* Eval data: Experimental SQuAD1.1 in Basque\n* Infrastructure: 1x GeForce RTX 2080", "## Outputs\n\nThe model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability for that span of text to be the correct answer. For example:", "## How to use", "## Hyperparameters" ]
[ "TAGS\n#transformers #pytorch #bert #question-answering #en #es #eu #endpoints_compatible #has_space #region-us \n", "# ixambert-base-cased finetuned for QA\n\nThis is a basic implementation of the multilingual model \"ixambert-base-cased\", fine-tuned on an experimental version of SQuAD1.1 in Basque (1/3 size of original SQuAD1.1), that is able to answer basic factual questions.", "## Overview\n\n* Language model: ixambert-base-cased\n* Languages: English, Spanish and Basque\n* Downstream task: Extractive QA\n* Training data: Experimental SQuAD1.1 in Basque\n* Eval data: Experimental SQuAD1.1 in Basque\n* Infrastructure: 1x GeForce RTX 2080", "## Outputs\n\nThe model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability for that span of text to be the correct answer. For example:", "## How to use", "## Hyperparameters" ]
question-answering
transformers
# ixambert-base-cased finetuned for QA This is a basic implementation of the multilingual model ["ixambert-base-cased"](https://huggingface.co/ixa-ehu/ixambert-base-cased), fine-tuned on SQuAD v1.1, that is able to answer basic factual questions in English, Spanish and Basque. ## Overview * **Language model:** ixambert-base-cased * **Languages:** English, Spanish and Basque * **Downstream task:** Extractive QA * **Training data:** SQuAD v1.1 * **Eval data:** SQuAD v1.1 * **Infrastructure:** 1x GeForce RTX 2080 ## Outputs The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability for that span of text to be the correct answer. For example: ```python {'score': 0.9667195081710815, 'start': 101, 'end': 105, 'answer': '1820'} ``` ## How to use ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "MarcBrun/ixambert-finetuned-squad" # To get predictions context = "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820" question = "When was Florence Nightingale born?" qa = pipeline("question-answering", model=model_name, tokenizer=model_name) pred = qa(question=question,context=context) # To load the model and tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Hyperparameters ``` batch_size = 8 n_epochs = 3 learning_rate = 2e-5 optimizer = AdamW lr_schedule = linear max_seq_len = 384 doc_stride = 128 ```
{"language": ["en", "es", "eu"], "datasets": ["squad"], "widget": [{"text": "When was Florence Nightingale born?", "context": "Florence Nightingale, known for being the founder of modern nursing, was born in Florence, Italy, in 1820.", "example_title": "English"}, {"text": "\u00bfPor qu\u00e9 provincias pasa el Tajo?", "context": "El Tajo es el r\u00edo m\u00e1s largo de la pen\u00ednsula ib\u00e9rica, a la que atraviesa en su parte central, siguiendo un rumbo este-oeste, con una leve inclinaci\u00f3n hacia el suroeste, que se acent\u00faa cuando llega a Portugal, donde recibe el nombre de Tejo.\nNace en los montes Universales, en la sierra de Albarrac\u00edn, sobre la rama occidental del sistema Ib\u00e9rico y, despu\u00e9s de recorrer 1007 km, llega al oc\u00e9ano Atl\u00e1ntico en la ciudad de Lisboa. En su desembocadura forma el estuario del mar de la Paja, en el que vierte un caudal medio de 456 m\u00b3/s. En sus primeros 816 km atraviesa Espa\u00f1a, donde discurre por cuatro comunidades aut\u00f3nomas (Arag\u00f3n, Castilla-La Mancha, Madrid y Extremadura) y un total de seis provincias (Teruel, Guadalajara, Cuenca, Madrid, Toledo y C\u00e1ceres).", "example_title": "Espa\u00f1ol"}, {"text": "Zer beste izenak ditu Tartalo?", "context": "Tartalo euskal mitologiako izaki begibakar artzain erraldoia da. Tartalo izena zenbait euskal hizkeratan herskari-bustidurarekin ahoskatu ohi denez, horrelaxe ere idazten da batzuetan: Ttarttalo. Euskal Herriko zenbait tokitan, Torto edo Anxo ere esaten diote.", "example_title": "Euskara"}]}
MarcBrun/ixambert-finetuned-squad
null
[ "transformers", "pytorch", "bert", "question-answering", "en", "es", "eu", "dataset:squad", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en", "es", "eu" ]
TAGS #transformers #pytorch #bert #question-answering #en #es #eu #dataset-squad #endpoints_compatible #has_space #region-us
# ixambert-base-cased finetuned for QA This is a basic implementation of the multilingual model "ixambert-base-cased", fine-tuned on SQuAD v1.1, that is able to answer basic factual questions in English, Spanish and Basque. ## Overview * Language model: ixambert-base-cased * Languages: English, Spanish and Basque * Downstream task: Extractive QA * Training data: SQuAD v1.1 * Eval data: SQuAD v1.1 * Infrastructure: 1x GeForce RTX 2080 ## Outputs The model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability for that span of text to be the correct answer. For example: ## How to use ## Hyperparameters
[ "# ixambert-base-cased finetuned for QA\n\nThis is a basic implementation of the multilingual model \"ixambert-base-cased\", fine-tuned on SQuAD v1.1, that is able to answer basic factual questions in English, Spanish and Basque.", "## Overview\n\n* Language model: ixambert-base-cased\n* Languages: English, Spanish and Basque\n* Downstream task: Extractive QA\n* Training data: SQuAD v1.1\n* Eval data: SQuAD v1.1\n* Infrastructure: 1x GeForce RTX 2080", "## Outputs\n\nThe model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability for that span of text to be the correct answer. For example:", "## How to use", "## Hyperparameters" ]
[ "TAGS\n#transformers #pytorch #bert #question-answering #en #es #eu #dataset-squad #endpoints_compatible #has_space #region-us \n", "# ixambert-base-cased finetuned for QA\n\nThis is a basic implementation of the multilingual model \"ixambert-base-cased\", fine-tuned on SQuAD v1.1, that is able to answer basic factual questions in English, Spanish and Basque.", "## Overview\n\n* Language model: ixambert-base-cased\n* Languages: English, Spanish and Basque\n* Downstream task: Extractive QA\n* Training data: SQuAD v1.1\n* Eval data: SQuAD v1.1\n* Infrastructure: 1x GeForce RTX 2080", "## Outputs\n\nThe model outputs the answer to the question, the start and end positions of the answer in the original context, and a score for the probability for that span of text to be the correct answer. For example:", "## How to use", "## Hyperparameters" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-legal_data This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.9101 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 26 | 5.3529 | | No log | 2.0 | 52 | 5.4226 | | No log | 3.0 | 78 | 5.2550 | | No log | 4.0 | 104 | 5.1011 | | No log | 5.0 | 130 | 5.1857 | | No log | 6.0 | 156 | 5.5119 | | No log | 7.0 | 182 | 5.4480 | | No log | 8.0 | 208 | 5.6993 | | No log | 9.0 | 234 | 5.9614 | | No log | 10.0 | 260 | 5.6987 | | No log | 11.0 | 286 | 5.6679 | | No log | 12.0 | 312 | 5.9850 | | No log | 13.0 | 338 | 5.6065 | | No log | 14.0 | 364 | 5.3162 | | No log | 15.0 | 390 | 5.7856 | | No log | 16.0 | 416 | 5.5786 | | No log | 17.0 | 442 | 5.6028 | | No log | 18.0 | 468 | 5.7649 | | No log | 19.0 | 494 | 5.5382 | | 1.8345 | 20.0 | 520 | 6.3654 | | 1.8345 | 21.0 | 546 | 5.3575 | | 1.8345 | 22.0 | 572 | 5.3808 | | 1.8345 | 23.0 | 598 | 5.9340 | | 1.8345 | 24.0 | 624 | 6.1475 | | 1.8345 | 25.0 | 650 | 6.2188 | | 1.8345 | 26.0 | 676 | 5.7651 | | 1.8345 | 27.0 | 702 | 6.2629 | | 1.8345 | 28.0 | 728 | 6.1356 | | 1.8345 | 29.0 | 754 | 5.9255 | | 1.8345 | 30.0 | 780 | 6.4252 | | 1.8345 | 31.0 | 806 | 5.6967 | | 1.8345 | 32.0 | 832 | 6.4324 | | 1.8345 | 33.0 | 858 | 6.5087 | | 1.8345 | 34.0 | 884 | 6.1113 | | 1.8345 | 35.0 | 910 | 6.7443 | | 1.8345 | 36.0 | 936 | 6.6970 | | 1.8345 | 37.0 | 962 | 6.5578 | | 1.8345 | 38.0 | 988 | 6.1963 | | 0.2251 | 39.0 | 1014 | 6.4893 | | 0.2251 | 40.0 | 1040 | 6.6347 | | 0.2251 | 41.0 | 1066 | 6.7106 | | 0.2251 | 42.0 | 1092 | 6.8129 | | 0.2251 | 43.0 | 1118 | 6.6386 | | 0.2251 | 44.0 | 1144 | 6.4134 | | 0.2251 | 45.0 | 1170 | 6.6883 | | 0.2251 | 46.0 | 1196 | 6.6406 | | 0.2251 | 47.0 | 1222 | 6.3065 | | 0.2251 | 48.0 | 1248 | 7.0281 | | 0.2251 | 49.0 | 1274 | 7.3646 | | 0.2251 | 50.0 | 1300 | 7.1086 | | 0.2251 | 51.0 | 1326 | 6.4749 | | 0.2251 | 52.0 | 1352 | 6.3303 | | 0.2251 | 53.0 | 1378 | 6.2919 | | 0.2251 | 54.0 | 1404 | 6.3855 | | 0.2251 | 55.0 | 1430 | 6.9501 | | 0.2251 | 56.0 | 1456 | 6.8714 | | 0.2251 | 57.0 | 1482 | 6.9856 | | 0.0891 | 58.0 | 1508 | 6.9910 | | 0.0891 | 59.0 | 1534 | 6.9293 | | 0.0891 | 60.0 | 1560 | 7.3493 | | 0.0891 | 61.0 | 1586 | 7.1834 | | 0.0891 | 62.0 | 1612 | 7.0479 | | 0.0891 | 63.0 | 1638 | 6.7674 | | 0.0891 | 64.0 | 1664 | 6.7553 | | 0.0891 | 65.0 | 1690 | 7.3074 | | 0.0891 | 66.0 | 1716 | 6.8071 | | 0.0891 | 67.0 | 1742 | 7.6622 | | 0.0891 | 68.0 | 1768 | 6.9555 | | 0.0891 | 69.0 | 1794 | 7.0153 | | 0.0891 | 70.0 | 1820 | 7.2085 | | 0.0891 | 71.0 | 1846 | 6.7582 | | 0.0891 | 72.0 | 1872 | 6.7989 | | 0.0891 | 73.0 | 1898 | 6.7012 | | 0.0891 | 74.0 | 1924 | 7.0088 | | 0.0891 | 75.0 | 1950 | 7.1024 | | 0.0891 | 76.0 | 1976 | 6.6968 | 
| 0.058 | 77.0 | 2002 | 7.5249 | | 0.058 | 78.0 | 2028 | 6.9199 | | 0.058 | 79.0 | 2054 | 7.1995 | | 0.058 | 80.0 | 2080 | 6.9349 | | 0.058 | 81.0 | 2106 | 7.4025 | | 0.058 | 82.0 | 2132 | 7.4199 | | 0.058 | 83.0 | 2158 | 6.8081 | | 0.058 | 84.0 | 2184 | 7.4777 | | 0.058 | 85.0 | 2210 | 7.1990 | | 0.058 | 86.0 | 2236 | 7.0062 | | 0.058 | 87.0 | 2262 | 7.5724 | | 0.058 | 88.0 | 2288 | 6.9362 | | 0.058 | 89.0 | 2314 | 7.1368 | | 0.058 | 90.0 | 2340 | 7.2183 | | 0.058 | 91.0 | 2366 | 6.8684 | | 0.058 | 92.0 | 2392 | 7.1433 | | 0.058 | 93.0 | 2418 | 7.2161 | | 0.058 | 94.0 | 2444 | 7.1442 | | 0.058 | 95.0 | 2470 | 7.3098 | | 0.058 | 96.0 | 2496 | 7.1264 | | 0.0512 | 97.0 | 2522 | 6.9424 | | 0.0512 | 98.0 | 2548 | 6.9155 | | 0.0512 | 99.0 | 2574 | 6.9038 | | 0.0512 | 100.0 | 2600 | 6.9101 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-legal_data", "results": []}]}
MariamD/distilbert-base-uncased-finetuned-legal_data
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-legal\_data ============================================= This model is a fine-tuned version of distilbert-base-uncased on the None dataset. It achieves the following results on the evaluation set: * Loss: 6.9101 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 100 ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.9.0+cu102 * Datasets 1.12.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bert-model-english This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1408 - Train Sparse Categorical Accuracy: 0.9512 - Validation Loss: nan - Validation Sparse Categorical Accuracy: 0.0 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch | |:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:| | 0.2775 | 0.8887 | nan | 0.0 | 0 | | 0.1702 | 0.9390 | nan | 0.0 | 1 | | 0.1300 | 0.9555 | nan | 0.0 | 2 | | 0.1346 | 0.9544 | nan | 0.0 | 3 | | 0.1408 | 0.9512 | nan | 0.0 | 4 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.7.0 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "bert-model-english", "results": []}]}
MarioPenguin/bert-model-english
null
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bert-model-english ================== This model is a fine-tuned version of bert-base-cased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.1408 * Train Sparse Categorical Accuracy: 0.9512 * Validation Loss: nan * Validation Sparse Categorical Accuracy: 0.0 * Epoch: 4 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'learning\_rate': 5e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.16.2 * TensorFlow 2.7.0 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 5e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.7.0\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 5e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.7.0\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bert-model-english1 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0274 - Train Accuracy: 0.9914 - Validation Loss: 0.3493 - Validation Accuracy: 0.9303 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.0366 | 0.9885 | 0.3013 | 0.9299 | 0 | | 0.0261 | 0.9912 | 0.3445 | 0.9351 | 1 | | 0.0274 | 0.9914 | 0.3493 | 0.9303 | 2 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.7.0 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "bert-model-english1", "results": []}]}
MarioPenguin/bert-model-english1
null
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bert-model-english1 =================== This model is a fine-tuned version of bert-base-cased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.0274 * Train Accuracy: 0.9914 * Validation Loss: 0.3493 * Validation Accuracy: 0.9303 * Epoch: 2 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'learning\_rate': 5e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.16.2 * TensorFlow 2.7.0 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 5e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.7.0\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 5e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.7.0\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # beto_amazon_posneu This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1277 - Train Accuracy: 0.9550 - Validation Loss: 0.3439 - Validation Accuracy: 0.8905 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.3195 | 0.8712 | 0.3454 | 0.8580 | 0 | | 0.1774 | 0.9358 | 0.3258 | 0.8802 | 1 | | 0.1277 | 0.9550 | 0.3439 | 0.8905 | 2 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.7.0 - Datasets 1.18.3 - Tokenizers 0.11.0
{"tags": ["generated_from_keras_callback"], "model-index": [{"name": "beto_amazon_posneu", "results": []}]}
MarioPenguin/beto_amazon_posneu
null
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #tf #bert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us
beto\_amazon\_posneu ==================== This model is a fine-tuned version of dccuchile/bert-base-spanish-wwm-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.1277 * Train Accuracy: 0.9550 * Validation Loss: 0.3439 * Validation Accuracy: 0.8905 * Epoch: 2 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'learning\_rate': 5e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.16.2 * TensorFlow 2.7.0 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 5e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.7.0\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 5e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.7.0\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-model This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8601 - Accuracy: 0.6117 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 84 | 0.8663 | 0.5914 | | No log | 2.0 | 168 | 0.8601 | 0.6117 | ### Framework versions - Transformers 4.16.1 - Pytorch 1.10.0+cu111 - Datasets 1.18.2 - Tokenizers 0.11.0
{"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "finetuned-model", "results": []}]}
MarioPenguin/finetuned-model
null
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
finetuned-model =============== This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-sentiment on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.8601 * Accuracy: 0.6117 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.16.1 * Pytorch 1.10.0+cu111 * Datasets 1.18.2 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-model-english This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1140 - Train Accuracy: 0.9596 - Validation Loss: 0.2166 - Validation Accuracy: 0.9301 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.2922 | 0.8804 | 0.2054 | 0.9162 | 0 | | 0.1710 | 0.9352 | 0.1879 | 0.9353 | 1 | | 0.1140 | 0.9596 | 0.2166 | 0.9301 | 2 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.7.0 - Tokenizers 0.11.0
{"license": "mit", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "roberta-model-english", "results": []}]}
MarioPenguin/roberta-model-english
null
[ "transformers", "tf", "roberta", "text-classification", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #tf #roberta #text-classification #generated_from_keras_callback #license-mit #autotrain_compatible #endpoints_compatible #region-us
roberta-model-english ===================== This model is a fine-tuned version of roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.1140 * Train Accuracy: 0.9596 * Validation Loss: 0.2166 * Validation Accuracy: 0.9301 * Epoch: 2 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'learning\_rate': 5e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.16.2 * TensorFlow 2.7.0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 5e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.7.0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #tf #roberta #text-classification #generated_from_keras_callback #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 5e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* TensorFlow 2.7.0\n* Tokenizers 0.11.0" ]
null
null
# albertZero

albertZero is a PyTorch model with a prediction head fine-tuned for SQuAD 2.0.

Based on Hugging Face's albert-base-v2, albertZero employs a novel method to speed up fine-tuning. It re-initializes the weights of the final linear layer in the shared ALBERT transformer block, resulting in a 2-percentage-point improvement during the early epochs of fine-tuning.

## Usage

albertZero can be loaded like this:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('MarshallHo/albertZero-squad2-base-v2')
model = AutoModel.from_pretrained('MarshallHo/albertZero-squad2-base-v2')
```

or

```python
import torch
from transformers import AlbertModel, AlbertTokenizer, AlbertForQuestionAnswering, AlbertPreTrainedModel

# AlbertForQuestionAnsweringAVPool is the author's custom question-answering head class.
mytokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForQuestionAnsweringAVPool.from_pretrained('albert-base-v2')
model.load_state_dict(torch.load('albertZero-squad2-base-v2.bin'))
```

## References

The goal of [ALBERT](https://arxiv.org/abs/1909.11942) is to reduce the memory requirement of the groundbreaking language model [BERT](https://arxiv.org/abs/1810.04805), while providing a similar level of performance. ALBERT mainly uses 2 methods to reduce the number of parameters – parameter sharing and factorized embedding.

The field of NLP has undergone major improvements in recent years. The replacement of recurrent architectures by attention-based models has allowed NLP tasks such as question-answering to approach human-level performance. In order to push the limits further, the [SQuAD2.0](https://arxiv.org/abs/1806.03822) dataset was created in 2018 with 50,000 additional unanswerable questions, addressing a major weakness of the original version of the dataset.

At the time of writing, near the top of the [SQuAD2.0 leaderboard](https://rajpurkar.github.io/SQuAD-explorer/) is Shanghai Jiao Tong University's [Retro-Reader](http://arxiv.org/abs/2001.09694). We have re-implemented their non-ensemble ALBERT model with the SQuAD2.0 prediction head.

## Acknowledgments

Thanks to the generosity of the team at Hugging Face and all the groups referenced above!
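The re-initialization trick described at the top of this card can be sketched roughly as follows. This is only an illustration under assumptions: the card does not show the author's actual code, and picking the shared layer's `ffn_output` projection as "the final linear layer" is a guess, not something the card confirms.

```python
import torch
from transformers import AlbertForQuestionAnswering

model = AlbertForQuestionAnswering.from_pretrained("albert-base-v2")

# ALBERT shares one transformer block across all layers, so re-initializing a single
# linear layer affects every layer of the encoder. Choosing `ffn_output` here is an
# assumption about which "final linear layer" the card means.
shared_layer = model.albert.encoder.albert_layer_groups[0].albert_layers[0]
linear = shared_layer.ffn_output
torch.nn.init.normal_(linear.weight, mean=0.0, std=model.config.initializer_range)
torch.nn.init.zeros_(linear.bias)
# ...then fine-tune on SQuAD 2.0 as usual.
```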
{}
MarshallHo/albertZero-squad2-base-v2
null
[ "arxiv:1909.11942", "arxiv:1810.04805", "arxiv:1806.03822", "arxiv:2001.09694", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1909.11942", "1810.04805", "1806.03822", "2001.09694" ]
[]
TAGS #arxiv-1909.11942 #arxiv-1810.04805 #arxiv-1806.03822 #arxiv-2001.09694 #region-us
# albertZero albertZero is a PyTorch model with a prediction head fine-tuned for SQuAD 2.0. Based on Hugging Face's albert-base-v2, albertZero employs a novel method to speed up fine-tuning. It re-initializes weights of final linear layer in the shared albert transformer block, resulting in a 2% point improvement during the early epochs of fine-tuning. ## Usage albertZero can be loaded like this: or ## References The goal of ALBERT is to reduce the memory requirement of the groundbreaking language model BERT, while providing a similar level of performance. ALBERT mainly uses 2 methods to reduce the number of parameters – parameter sharing and factorized embedding. The field of NLP has undergone major improvements in recent years. The replacement of recurrent architectures by attention-based models has allowed NLP tasks such as question-answering to approach human level performance. In order to push the limits further, the SQuAD2.0 dataset was created in 2018 with 50,000 additional unanswerable questions, addressing a major weakness of the original version of the dataset. At the time of writing, near the top of the SQuAD2.0 leaderboard is Shanghai Jiao Tong University’s Retro-Reader. We have re-implemented their non-ensemble ALBERT model with the SQUAD2.0 prediction head. ## Acknowledgments Thanks to the generosity of the team at Hugging Face and all the groups referenced above !
[ "# albertZero\n\nalbertZero is a PyTorch model with a prediction head fine-tuned for SQuAD 2.0. \n\nBased on Hugging Face's albert-base-v2, albertZero employs a novel method to speed up fine-tuning. It re-initializes weights of final linear layer in the shared albert transformer block, resulting in a 2% point improvement during the early epochs of fine-tuning.", "## Usage\n\nalbertZero can be loaded like this:\n\n\n\nor", "## References\n\nThe goal of ALBERT is to reduce the memory requirement of the groundbreaking\nlanguage model BERT, while providing a similar level of performance. ALBERT mainly uses 2 methods to reduce the number of parameters – parameter sharing and factorized embedding. \n\nThe field of NLP has undergone major improvements in recent years. The\nreplacement of recurrent architectures by attention-based models has allowed NLP tasks such as\nquestion-answering to approach human level performance. In order to push the limits further, the\nSQuAD2.0 dataset was created in 2018 with 50,000 additional unanswerable questions, addressing a major weakness of the original version of the dataset.\n\nAt the time of writing, near the top of the SQuAD2.0 leaderboard is Shanghai Jiao Tong University’s Retro-Reader.\nWe have re-implemented their non-ensemble ALBERT model with the SQUAD2.0 prediction head.", "## Acknowledgments\n\nThanks to the generosity of the team at Hugging Face and all the groups referenced above !" ]
[ "TAGS\n#arxiv-1909.11942 #arxiv-1810.04805 #arxiv-1806.03822 #arxiv-2001.09694 #region-us \n", "# albertZero\n\nalbertZero is a PyTorch model with a prediction head fine-tuned for SQuAD 2.0. \n\nBased on Hugging Face's albert-base-v2, albertZero employs a novel method to speed up fine-tuning. It re-initializes weights of final linear layer in the shared albert transformer block, resulting in a 2% point improvement during the early epochs of fine-tuning.", "## Usage\n\nalbertZero can be loaded like this:\n\n\n\nor", "## References\n\nThe goal of ALBERT is to reduce the memory requirement of the groundbreaking\nlanguage model BERT, while providing a similar level of performance. ALBERT mainly uses 2 methods to reduce the number of parameters – parameter sharing and factorized embedding. \n\nThe field of NLP has undergone major improvements in recent years. The\nreplacement of recurrent architectures by attention-based models has allowed NLP tasks such as\nquestion-answering to approach human level performance. In order to push the limits further, the\nSQuAD2.0 dataset was created in 2018 with 50,000 additional unanswerable questions, addressing a major weakness of the original version of the dataset.\n\nAt the time of writing, near the top of the SQuAD2.0 leaderboard is Shanghai Jiao Tong University’s Retro-Reader.\nWe have re-implemented their non-ensemble ALBERT model with the SQUAD2.0 prediction head.", "## Acknowledgments\n\nThanks to the generosity of the team at Hugging Face and all the groups referenced above !" ]
text-generation
transformers
# Neo-GPT-Title-Generation-Electric-Car

Title generator based on Neo-GPT 125M fine-tuned on a dataset of 39k URL titles. All URLs were selected from the top 10 Google results for a list of keywords about "Electric car" - "Electric car for sale".

# Pipeline example
```python
from transformers import GPT2Tokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('Martian/Neo-GPT-Title-Generation-Electric-Car')
tokenizer = GPT2Tokenizer.from_pretrained('Martian/Neo-GPT-Title-Generation-Electric-Car',
                                          bos_token='<|startoftext|>',
                                          eos_token='<|endoftext|>',
                                          pad_token='<|pad|>')

prompt = "<|startoftext|> Electric car"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sample 20 candidate titles.
gen_tokens = model.generate(input_ids, do_sample=True, top_k=100, min_length=30,
                            max_length=150, top_p=0.90, num_return_sequences=20)

list_title_gen = [tokenizer.decode(sample_output, skip_special_tokens=True)
                  for sample_output in gen_tokens]

# Keep only the part of each generated title before the first separator.
for idx, title in enumerate(list_title_gen):
    for sep in (' | ', ' - ', ' — '):
        title = title.split(sep)[0]
    list_title_gen[idx] = title

# Clean up whitespace artifacts and drop outputs that only repeat the prompt.
list_title_gen = [t.replace('�', ' ').replace('\r', ' ').replace('\t', ' ').replace('\xa0', ' ')
                  for t in list_title_gen]
list_title_gen = [t if t != '<|startoftext|> Electric car' else '' for t in list_title_gen]

for title in list_title_gen:
    print(title)
```

# Todo
- Improve the quality of the training sample
- Add more data
{"language": ["en"], "widget": [{"text": "Tesla range"}, {"text": "Nissan Leaf is"}, {"text": "Tesla is"}, {"text": "The best electric car"}]}
Martian/Neo-GPT-Title-Generation-Electric-Car
null
[ "transformers", "pytorch", "gpt_neo", "text-generation", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #gpt_neo #text-generation #en #autotrain_compatible #endpoints_compatible #region-us
# Neo-GPT-Title-Generation-Electric-Car

Title generator based on Neo-GPT 125M fine-tuned on a dataset of 39k URL titles. All URLs were selected from the top 10 Google results for a list of keywords about "Electric car" - "Electric car for sale".

# Pipeline example

# Todo
- Improve the quality of the training sample
- Add more data
[ "# Neo-GPT-Title-Generation-Electric-Car\n\nTitle generator based on Neo-GPT 125M fine-tuned on a dataset of 39k url's title. All urls are selected on the TOP 10 google on a list of Keywords about \"Electric car\" - \"Electric car for sale\".", "# Pipeline example", "# Todo\n- Improve the quality of the training sample\n- Add more data" ]
[ "TAGS\n#transformers #pytorch #gpt_neo #text-generation #en #autotrain_compatible #endpoints_compatible #region-us \n", "# Neo-GPT-Title-Generation-Electric-Car\n\nTitle generator based on Neo-GPT 125M fine-tuned on a dataset of 39k url's title. All urls are selected on the TOP 10 google on a list of Keywords about \"Electric car\" - \"Electric car for sale\".", "# Pipeline example", "# Todo\n- Improve the quality of the training sample\n- Add more data" ]
automatic-speech-recognition
transformers
# wav2vec2-large-xlsr-53-breton
The model can be used directly (without a language model) as follows:
```python
import re
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

lang = "br"
test_dataset = load_dataset("common_voice", lang, split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("Marxav/wav2vec2-large-xlsr-53-breton")
model = Wav2Vec2ForCTC.from_pretrained("Marxav/wav2vec2-large-xlsr-53-breton")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

chars_to_ignore_regex = '[\\,\,\?\.\!\;\:\"\“\%\”\�\(\)\/\«\»\½\…]'

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
    batch["sentence"] = re.sub("ʼ", "'", batch["sentence"])
    batch["sentence"] = re.sub("’", "'", batch["sentence"])
    batch["sentence"] = re.sub('‘', "'", batch["sentence"])
    return batch

nb_samples = 2
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:nb_samples], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:nb_samples])
```
The above code leads to the following prediction for the first two samples:
* Prediction: ["neller ket dont a-benn eus netra la vez ser merc'hed evel sich", 'an eil hag egile']
* Reference: ["N'haller ket dont a-benn eus netra pa vezer nec'het evel-se.", 'An eil hag egile.']

The model can be evaluated as follows on the {language} test data of Common Voice.
```python
import re
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

lang = 'br'
test_dataset = load_dataset("common_voice", lang, split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained('Marxav/wav2vec2-large-xlsr-53-breton')
model = Wav2Vec2ForCTC.from_pretrained('Marxav/wav2vec2-large-xlsr-53-breton')
model.to("cuda")

chars_to_ignore_regex = '[\\,\,\?\.\!\;\:\"\“\%\”\�\(\)\/\«\»\½\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
    batch["sentence"] = re.sub("ʼ", "'", batch["sentence"])
    batch["sentence"] = re.sub("’", "'", batch["sentence"])
    batch["sentence"] = re.sub('‘', "'", batch["sentence"])
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run inference on the preprocessed test set and collect predictions.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 43.43%

## Training
The Common Voice `train`, `validation` datasets were used for training.
{"language": "br", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Breton by Marxav", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice br", "type": "common_voice", "args": "br"}, "metrics": [{"type": "wer", "value": 43.43, "name": "Test WER"}]}]}]}
Marxav/wav2vec2-large-xlsr-53-breton
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "br", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "br" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #br #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# wav2vec2-large-xlsr-53-breton The model can be used directly (without a language model) as follows: The above code leads to the following prediction for the first two samples: * Prediction: ["neller ket dont a-benn eus netra la vez ser merc'hed evel sich", 'an eil hag egile'] * Reference: ["N'haller ket dont a-benn eus netra pa vezer nec'het evel-se.", 'An eil hag egile.'] The model can be evaluated as follows on the {language} test data of Common Voice. Test Result: 43.43% ## Training The Common Voice 'train', 'validation' datasets were used for training.
[ "# wav2vec2-large-xlsr-53-breton\nThe model can be used directly (without a language model) as follows:\n\nThe above code leads to the following prediction for the first two samples:\n* Prediction: [\"neller ket dont a-benn eus netra la vez ser merc'hed evel sich\", 'an eil hag egile']\n* Reference: [\"N'haller ket dont a-benn eus netra pa vezer nec'het evel-se.\", 'An eil hag egile.']\n\nThe model can be evaluated as follows on the {language} test data of Common Voice.\n\n\nTest Result: 43.43%", "## Training\nThe Common Voice 'train', 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #br #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# wav2vec2-large-xlsr-53-breton\nThe model can be used directly (without a language model) as follows:\n\nThe above code leads to the following prediction for the first two samples:\n* Prediction: [\"neller ket dont a-benn eus netra la vez ser merc'hed evel sich\", 'an eil hag egile']\n* Reference: [\"N'haller ket dont a-benn eus netra pa vezer nec'het evel-se.\", 'An eil hag egile.']\n\nThe model can be evaluated as follows on the {language} test data of Common Voice.\n\n\nTest Result: 43.43%", "## Training\nThe Common Voice 'train', 'validation' datasets were used for training." ]
text-generation
transformers
# GPT2 - RUS
{"language": "ru", "tags": ["text-generation"]}
Mary222/GPT2_RU_GAME
null
[ "transformers", "pytorch", "gpt2", "text-generation", "ru", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #gpt2 #text-generation #ru #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# GPT2 - RUS
[ "# GPT2 - RUS" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #ru #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# GPT2 - RUS" ]
text-generation
transformers
# GPT2 - RUS
{"language": "ru", "tags": ["text-generation"]}
Mary222/GPT2_standard
null
[ "transformers", "pytorch", "gpt2", "feature-extraction", "text-generation", "ru", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #gpt2 #feature-extraction #text-generation #ru #endpoints_compatible #text-generation-inference #region-us
# GPT2 - RUS
[ "# GPT2 - RUS" ]
[ "TAGS\n#transformers #pytorch #gpt2 #feature-extraction #text-generation #ru #endpoints_compatible #text-generation-inference #region-us \n", "# GPT2 - RUS" ]
text-generation
transformers
# GPT2 - RUS
{"language": "ru", "tags": ["text-generation"]}
Mary222/MADE_AI_Dungeon_model_RUS
null
[ "transformers", "pytorch", "gpt2", "text-generation", "ru", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #gpt2 #text-generation #ru #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# GPT2 - RUS
[ "# GPT2 - RUS" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #ru #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# GPT2 - RUS" ]
text-generation
transformers
# GPT2 - RUS
{"language": "ru", "tags": ["text-generation"]}
Mary222/SBERBANK_RUS
null
[ "transformers", "pytorch", "gpt2", "text-generation", "ru", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #gpt2 #text-generation #ru #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# GPT2 - RUS
[ "# GPT2 - RUS" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #ru #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# GPT2 - RUS" ]
text-generation
transformers
# LSTM
{"language": "ru", "license": "apache-2.0", "tags": ["text-generation"], "datasets": ["bookcorpus", "wikipedia"]}
Mary222/made-ai-dungeon
null
[ "transformers", "text-generation", "ru", "dataset:bookcorpus", "dataset:wikipedia", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ru" ]
TAGS #transformers #text-generation #ru #dataset-bookcorpus #dataset-wikipedia #license-apache-2.0 #endpoints_compatible #region-us
# LSTM
[ "# LSTM" ]
[ "TAGS\n#transformers #text-generation #ru #dataset-bookcorpus #dataset-wikipedia #license-apache-2.0 #endpoints_compatible #region-us \n", "# LSTM" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-ar-en-finetuned-ar-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on the opus_wikipedia dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
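Since the card stops at the training setup, a minimal usage sketch follows. It assumes the repo id listed in this record and the standard Marian seq2seq API; the Arabic sample sentence is only a placeholder.

```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "MaryaAI/opus-mt-ar-en-finetuned-ar-to-en"  # repo id as listed in this record (assumption)
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

batch = tokenizer(["مرحبا بالعالم"], return_tensors="pt", padding=True)  # placeholder Arabic input
generated = model.generate(**batch, max_length=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```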
{"tags": ["generated_from_trainer"], "datasets": ["opus_wikipedia"]}
MaryaAI/opus-mt-ar-en-finetuned-ar-to-en
null
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "dataset:opus_wikipedia", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #dataset-opus_wikipedia #autotrain_compatible #endpoints_compatible #region-us
# opus-mt-ar-en-finetuned-ar-to-en This model is a fine-tuned version of Helsinki-NLP/opus-mt-ar-en on the opus_wikipedia dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
[ "# opus-mt-ar-en-finetuned-ar-to-en\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-ar-en on the opus_wikipedia dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.10.0\n- Pytorch 1.9.0+cu102\n- Datasets 1.11.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #dataset-opus_wikipedia #autotrain_compatible #endpoints_compatible #region-us \n", "# opus-mt-ar-en-finetuned-ar-to-en\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-ar-en on the opus_wikipedia dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.10.0\n- Pytorch 1.9.0+cu102\n- Datasets 1.11.0\n- Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-ar-en-finetunedTanzil-v5-ar-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.8101 - Validation Loss: 0.9477 - Train Bleu: 9.3241 - Train Gen Len: 88.73 - Train Rouge1: 56.4906 - Train Rouge2: 34.2668 - Train Rougel: 53.2279 - Train Rougelsum: 53.7836 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Bleu | Train Gen Len | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Epoch | |:----------:|:---------------:|:----------:|:-------------:|:------------:|:------------:|:------------:|:---------------:|:-----:| | 0.8735 | 0.9809 | 11.0863 | 78.68 | 56.4557 | 33.3673 | 53.4828 | 54.1197 | 0 | | 0.8408 | 0.9647 | 9.8543 | 88.955 | 57.3797 | 34.3539 | 53.8783 | 54.3714 | 1 | | 0.8101 | 0.9477 | 9.3241 | 88.73 | 56.4906 | 34.2668 | 53.2279 | 53.7836 | 2 | ### Framework versions - Transformers 4.17.0.dev0 - TensorFlow 2.7.0 - Datasets 1.18.4.dev0 - Tokenizers 0.10.3
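Because this checkpoint was trained with Keras, a short TensorFlow usage sketch may help. It assumes the repo id listed in this record, the standard TF seq2seq auto classes, and an illustrative input sentence; the card itself only reports training metrics.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "MaryaAI/opus-mt-ar-en-finetunedTanzil-v5-ar-to-en"  # repo id as listed in this record (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer(["قل هو الله أحد"], return_tensors="tf", padding=True)  # illustrative input
outputs = model.generate(inputs["input_ids"],
                         attention_mask=inputs["attention_mask"],
                         max_length=90)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```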
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "opus-mt-ar-en-finetunedTanzil-v5-ar-to-en", "results": []}]}
MaryaAI/opus-mt-ar-en-finetunedTanzil-v5-ar-to-en
null
[ "transformers", "tf", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #tf #marian #text2text-generation #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
opus-mt-ar-en-finetunedTanzil-v5-ar-to-en ========================================= This model is a fine-tuned version of Helsinki-NLP/opus-mt-ar-en on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.8101 * Validation Loss: 0.9477 * Train Bleu: 9.3241 * Train Gen Len: 88.73 * Train Rouge1: 56.4906 * Train Rouge2: 34.2668 * Train Rougel: 53.2279 * Train Rougelsum: 53.7836 * Epoch: 2 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': 2e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\_decay\_rate': 0.01} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.17.0.dev0 * TensorFlow 2.7.0 * Datasets 1.18.4.dev0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 2e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* TensorFlow 2.7.0\n* Datasets 1.18.4.dev0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #tf #marian #text2text-generation #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 2e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* TensorFlow 2.7.0\n* Datasets 1.18.4.dev0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ar-finetuned-Math-13-10-en-to-ar This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the syssr_en_ar dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["syssr_en_ar"], "model-index": [{"name": "opus-mt-en-ar-finetuned-Math-13-10-en-to-ar", "results": []}]}
MaryaAI/opus-mt-en-ar-finetuned-Math-13-10-en-to-ar
null
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "dataset:syssr_en_ar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #dataset-syssr_en_ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# opus-mt-en-ar-finetuned-Math-13-10-en-to-ar This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ar on the syssr_en_ar dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.0 - Tokenizers 0.10.3
[ "# opus-mt-en-ar-finetuned-Math-13-10-en-to-ar\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ar on the syssr_en_ar dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.9.0+cu111\n- Datasets 1.13.0\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #dataset-syssr_en_ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# opus-mt-en-ar-finetuned-Math-13-10-en-to-ar\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ar on the syssr_en_ar dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.9.0+cu111\n- Datasets 1.13.0\n- Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the syssr_en_ar dataset. It achieves the following results on the evaluation set: - Loss: 1.2046 - Bleu: 7.9946 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 1 | 1.2038 | 7.9946 | 20.0 | | No log | 2.0 | 2 | 1.2038 | 7.9946 | 20.0 | | No log | 3.0 | 3 | 1.2038 | 7.9946 | 20.0 | | No log | 4.0 | 4 | 1.2036 | 7.9946 | 20.0 | | No log | 5.0 | 5 | 1.2046 | 7.9946 | 20.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["syssr_en_ar"], "metrics": ["bleu"], "model-index": [{"name": "opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "syssr_en_ar", "type": "syssr_en_ar", "args": "default"}, "metrics": [{"type": "bleu", "value": 7.9946, "name": "Bleu"}]}]}]}
MaryaAI/opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en
null
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "dataset:syssr_en_ar", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #dataset-syssr_en_ar #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en ================================================ This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ar on the syssr\_en\_ar dataset. It achieves the following results on the evaluation set: * Loss: 1.2046 * Bleu: 7.9946 * Gen Len: 20.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.9.0+cu111 * Datasets 1.12.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #dataset-syssr_en_ar #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.0589 - Validation Loss: 5.3227 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.0589 | 5.3227 | 0 | ### Framework versions - Transformers 4.17.0.dev0 - TensorFlow 2.7.0 - Datasets 1.18.3.dev0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar", "results": []}]}
MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar
null
[ "transformers", "tf", "tensorboard", "marian", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #tf #tensorboard #marian #text2text-generation #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar =============================================== This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ar on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 2.0589 * Validation Loss: 5.3227 * Epoch: 0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': 2e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\_decay\_rate': 0.01} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.17.0.dev0 * TensorFlow 2.7.0 * Datasets 1.18.3.dev0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 2e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* TensorFlow 2.7.0\n* Datasets 1.18.3.dev0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #tf #tensorboard #marian #text2text-generation #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 2e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* TensorFlow 2.7.0\n* Datasets 1.18.3.dev0\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ro-finetuned-en-to-ro This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.2886 - Bleu: 28.1599 - Gen Len: 34.1236 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 0.7437 | 1.0 | 38145 | 1.2886 | 28.1599 | 34.1236 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
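To make the reported BLEU concrete, here is a rough evaluation sketch in the spirit of the standard translation examples. It assumes sacrebleu via `datasets.load_metric` and only scores a tiny slice of the WMT16 validation split, so the number it prints will not match the 28.16 the trainer computed on the full set.

```python
from datasets import load_dataset, load_metric
from transformers import MarianMTModel, MarianTokenizer

model_id = "MaryaAI/opus-mt-en-ro-finetuned-en-to-ro"  # repo id as listed in this record (assumption)
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

ds = load_dataset("wmt16", "ro-en", split="validation[:8]")  # tiny slice, for illustration only
metric = load_metric("sacrebleu")

sources = [ex["translation"]["en"] for ex in ds]
references = [[ex["translation"]["ro"]] for ex in ds]

batch = tokenizer(sources, return_tensors="pt", padding=True, truncation=True)
outputs = model.generate(**batch, max_length=128)
predictions = tokenizer.batch_decode(outputs, skip_special_tokens=True)

print(metric.compute(predictions=predictions, references=references)["score"])
```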
{"tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model-index": [{"name": "opus-mt-en-ro-finetuned-en-to-ro", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 28.1599, "name": "Bleu"}]}]}]}
MaryaAI/opus-mt-en-ro-finetuned-en-to-ro
null
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #dataset-wmt16 #model-index #autotrain_compatible #endpoints_compatible #region-us
opus-mt-en-ro-finetuned-en-to-ro ================================ This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ro on the wmt16 dataset. It achieves the following results on the evaluation set: * Loss: 1.2886 * Bleu: 28.1599 * Gen Len: 34.1236 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.10.0 * Pytorch 1.9.0+cu102 * Datasets 1.11.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #dataset-wmt16 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3" ]
text-generation
transformers
# Rick and Morty DialoGPT Model
{"tags": ["conversational"]}
MathiasVS/DialoGPT-small-RickAndMorty
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
# Rick and Morty DialoGPT Model
[ "# Rick and Morty DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "# Rick and Morty DialoGPT Model" ]
text-classification
transformers
# German BERT for News Classification

This is a bert-base-german-cased model fine-tuned for text classification on German news articles.

## Training data
Used the training set from the 10KGNAD dataset (gnad10 on HuggingFace Datasets).
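A minimal classification sketch, assuming the repo id listed in this record and that the fine-tuned config stores the 10KGNAD topic names in `id2label` (otherwise generic `LABEL_i` names are returned); the German example sentence and the shown output are illustrative only.

```python
from transformers import pipeline

# Repo id taken from this record; label names depend on the saved config.
classifier = pipeline("text-classification", model="laiking/bert-base-german-cased-gnad10")

print(classifier("Der DAX legte nach der Zinsentscheidung der EZB deutlich zu."))
# e.g. [{'label': 'Wirtschaft', 'score': 0.98}]  (illustrative output only)
```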
{"language": ["de"], "tags": ["text-classification", "german-news-classification"], "datasets": ["gnad10"], "metrics": ["accuracy", "precision", "recall", "f1"], "model-index": [{"name": "Mathking/bert-base-german-cased-gnad10", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "gnad10", "type": "gnad10", "config": "default", "split": "train"}, "metrics": [{"type": "accuracy", "value": 0.9557598702001082, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTkxNjAwNTYzYjRjZmQ0M2UxMWQzYzk0YWFjZjRmYzcwNGEyYmRiNDIwNTlmNDNhYjAzNzBmNzU5MTg3MTM1ZSIsInZlcnNpb24iOjF9.1KfABx9YVvR2QiSXwtCBV8ijYGqwiQD3N3i7c1KV2Ke9tQvWA4_HnN7wvCKokESR-zEwIHWfALSveWIgoiSNBg"}, {"type": "f1", "value": 0.9550736462647613, "name": "F1 Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDNkYjU0NzAxNjBlOGQ1MWU2OGE5NWFkOGFlNTYwZGFkNTRiMDcwNDRlYmNiMTUxMzViM2Q4MmUyMjU2ZTQwYyIsInZlcnNpb24iOjF9.E9ysIc4ZYrpOpQTJsmLRN1q8Pg-5pWLlvs8WbTeJy2JYNmpBNblaGyeiHckZ8g8gD3Rqv7W9inpivmHRcI4-BQ"}, {"type": "f1", "value": 0.9557598702001082, "name": "F1 Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWMxNmVjMjYyNTAxYmYwN2YxNjAzOWQ2MDY3OGRhYzE4NWYwYTUyNjRhNmU2M2Y3MzFiYzI2ZTk4YWQ3NGNkNSIsInZlcnNpb24iOjF9.csdfLvORGZJY11TbWzylKfhz53BAncrjNgCDIGtWzK1AtJutkJj-SQo8rEd9o3Z5BKlH3Ta28O3Y7wKoc4PuDQ"}, {"type": "f1", "value": 0.9556789875763837, "name": "F1 Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2I1ZmNjMzViMDY1YWMyNzRkNDY0OTY1YTFkZWViN2JiMDlkMjJjNTZmZDFjZDIxZjA0YzI1NThiODUwMDlhZiIsInZlcnNpb24iOjF9.83yH-SfIAeB9Y3XNPcnn8N3g9puooZRgcBfNMeAKNqNM93U1qEE6JjFvhZBO_UU05cgfqnPp7Pt6h-JQcmdwBA"}, {"type": "precision", "value": 0.953834169384936, "name": "Precision Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjQ4YjA2MTZlMmYxMTA4ZTM5MDU1NjI3ZWE4YTBiZDBhMDUwN2FiODZkNjM5OWNiNGU2NjU5ZDE0OTUyODZmNyIsInZlcnNpb24iOjF9.sWcghxM9DeaaldnXR5sLz8KUHVhdjJ8GY_c4f-kZ0-0BDzf4CYURUVziWnlrRTjlUH-hVyfdKd1ufHvLotRgCg"}, {"type": "precision", "value": 0.9557598702001082, "name": "Precision Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWIzZmNlZTcxNzhhMzZhNWQ1ZWI4YzZjMDYyOTMwY2Q5N2EwMzFhMzE4OTFkZjg1NTIyYjVkMGNjZDYwZmQ2YSIsInZlcnNpb24iOjF9.rQ7ZIKeP25hLfHaYdPqX-VZCHoL-YohqGV9NZ-TAIHvNQbj0lPpX_nS89cJ1C0tSoHCeP14lIOWNncRJzQOOCA"}, {"type": "precision", "value": 0.9558822798145145, "name": "Precision Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDQzOTMxMGQ4YTI5MDUzNjdhNzdjY2QzNGVlNzUyODE4ZTI1MTY4NTkxZDVhMTBjZjhhMjlmNzRiNjEyOTk3NiIsInZlcnNpb24iOjF9.DWBZXL1mP7oNYQJKCORItDvkZm-l7TcIETNjdeVyS0BnxoEbqEE22OOJwnGLAk-wHtfx7jEKAA7ijQ1qF7cfAg"}, {"type": "recall", "value": 0.956651983810566, "name": "Recall Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTFhYTUyZWQ0N2VhOWQxMjY0MGM1ZjExOGE4NDQ5ODMzMmQ5YThkZTYzZjg0YmUwMDhlZDllMDk3MzY2ZWUzZSIsInZlcnNpb24iOjF9.H7UhmKtJ_5FZOQmZP-wPTrHHde-XxtMAj3kluHz6-8P1KOwJkxk24Lu7vTwHf3564XtnJC8eW2C5uyWDTpcgBg"}, {"type": "recall", "value": 0.9557598702001082, "name": "Recall Micro", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGY1MWZkOWYzNjg1NGU5YmFmODY2MDNjYWQ3OTUwNTgzMWRlZGUwNzU5NDY2NzFjZTMxOTBiMWVhZWIyNDYzMCIsInZlcnNpb24iOjF9.oKQ0zRYEs-sloah-BJvBKX5SFqWt8UX-0jCi3ldaLwNVJjM-rcdvsERyoYQ-QTLPKsZp4nko3-ic-BDCwGp9Bw"}, {"type": "recall", "value": 0.9557598702001082, "name": "Recall Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDlhMmIwOTBkOTIzOTlkZjNiMzlkMmE5NzQ3MzY5NTUxODQyMzY1OTJjNWY4NjI0N2NjYmY5NjkwZjU0MTA1YyIsInZlcnNpb24iOjF9.4FExU6skNNcvIrToS3MR04Q7ho7_PITTqPk8WMdOggaVvnwj8ujxcXyJMSRioQ1ttVlpg_oGismsSD9zttYkBg"}, {"type": "loss", "value": 0.17337004840373993, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzVmMmQ5OGE0OTU3MTg0NDg4YzhlODU1NWUyODM0NzFjODM3MTY5MWI2OTAyMzU5OTQ2YTljZTJkN2JkYTcyNSIsInZlcnNpb24iOjF9.jeYTrX35vtswkWi8ROqynY_W4rHfxonic74PviTNAKJzTF7tUCI2a9IBavXvSQhMfGv0NEkZzX8N8o4hQTvWDw"}]}]}]}
laiking/bert-base-german-cased-gnad10
null
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "german-news-classification", "de", "dataset:gnad10", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #safetensors #bert #text-classification #german-news-classification #de #dataset-gnad10 #model-index #autotrain_compatible #endpoints_compatible #region-us
# German BERT for News Classification

This is a bert-base-german-cased model fine-tuned for text classification on German news articles.

## Training data
Used the training set from the 10KGNAD dataset (gnad10 on HuggingFace Datasets).
[ "# German BERT for News Classification\n\nThis a bert-base-german-cased model finetuned for text classification on german news articles", "## Training data\nUsed the training set from the 10KGNAD dataset (gnad10 on HuggingFace Datasets)." ]
[ "TAGS\n#transformers #pytorch #safetensors #bert #text-classification #german-news-classification #de #dataset-gnad10 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "# German BERT for News Classification\n\nThis a bert-base-german-cased model finetuned for text classification on german news articles", "## Training data\nUsed the training set from the 10KGNAD dataset (gnad10 on HuggingFace Datasets)." ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-common_voice-nl-demo This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - NL dataset. It achieves the following results on the evaluation set: - Loss: 0.3523 - Wer: 0.2046 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0536 | 1.12 | 500 | 0.5349 | 0.4338 | | 0.2543 | 2.24 | 1000 | 0.3859 | 0.3029 | | 0.1472 | 3.36 | 1500 | 0.3471 | 0.2818 | | 0.1088 | 4.47 | 2000 | 0.3489 | 0.2731 | | 0.0855 | 5.59 | 2500 | 0.3582 | 0.2558 | | 0.0721 | 6.71 | 3000 | 0.3457 | 0.2471 | | 0.0653 | 7.83 | 3500 | 0.3299 | 0.2357 | | 0.0527 | 8.95 | 4000 | 0.3440 | 0.2334 | | 0.0444 | 10.07 | 4500 | 0.3417 | 0.2289 | | 0.0404 | 11.19 | 5000 | 0.3691 | 0.2204 | | 0.0345 | 12.3 | 5500 | 0.3453 | 0.2102 | | 0.0288 | 13.42 | 6000 | 0.3634 | 0.2089 | | 0.027 | 14.54 | 6500 | 0.3532 | 0.2044 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
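The card lists only training details, so a hedged transcription sketch follows. `example.wav` is a hypothetical local recording, and the downmix plus resampling to 16 kHz mirrors what XLSR checkpoints expect.

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "MatsUy/wav2vec2-common_voice-nl-demo"  # repo id as listed in this record (assumption)
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sr = torchaudio.load("example.wav")          # hypothetical local recording
speech = speech.mean(dim=0)                          # downmix to mono
speech = torchaudio.transforms.Resample(sr, 16_000)(speech)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```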
{"language": ["nl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-common_voice-nl-demo", "results": []}]}
MatsUy/wav2vec2-common_voice-nl-demo
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "nl", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "nl" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #nl #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-common\_voice-nl-demo ============================== This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the COMMON\_VOICE - NL dataset. It achieves the following results on the evaluation set: * Loss: 0.3523 * Wer: 0.2046 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 15.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu102 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #nl #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 15.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 4 This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1243 - Precision: 0.5220 - Recall: 0.6137 - F1: 0.5641 - Accuracy: 0.9630 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 134 | 0.1357 | 0.4549 | 0.5521 | 0.4988 | 0.9574 | | No log | 2.0 | 268 | 0.1243 | 0.5220 | 0.6137 | 0.5641 | 0.9630 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "4", "results": []}]}
Matthijsvanhof/4
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
4 = This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1243 * Precision: 0.5220 * Recall: 0.6137 * F1: 0.5641 * Accuracy: 0.9630 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu111 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-dutch-cased-finetuned-NER This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1078 - Precision: 0.6129 - Recall: 0.6639 - F1: 0.6374 - Accuracy: 0.9688 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 267 | 0.1131 | 0.6090 | 0.6264 | 0.6176 | 0.9678 | | 0.1495 | 2.0 | 534 | 0.1078 | 0.6129 | 0.6639 | 0.6374 | 0.9688 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
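A minimal inference sketch (hedged, since the card lists no usage section): it runs the checkpoint through the generic token-classification pipeline; the example sentence is made up, and the entity label set depends on the unspecified training data, so the exact output schema is an assumption.

```python
# Sketch: run the fine-tuned Dutch NER checkpoint through the generic pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Matthijsvanhof/bert-base-dutch-cased-finetuned-NER",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Het Rijksmuseum in Amsterdam werd in 1885 geopend."))
```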
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-dutch-cased-finetuned-NER", "results": []}]}
Matthijsvanhof/bert-base-dutch-cased-finetuned-NER
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
bert-base-dutch-cased-finetuned-NER =================================== This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1078 * Precision: 0.6129 * Recall: 0.6639 * F1: 0.6374 * Accuracy: 0.9688 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-dutch-cased-finetuned-NER8 This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1482 - Precision: 0.4716 - Recall: 0.4359 - F1: 0.4530 - Accuracy: 0.9569 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 68 | 0.1705 | 0.3582 | 0.3488 | 0.3535 | 0.9475 | | No log | 2.0 | 136 | 0.1482 | 0.4716 | 0.4359 | 0.4530 | 0.9569 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-dutch-cased-finetuned-NER8", "results": []}]}
Matthijsvanhof/bert-base-dutch-cased-finetuned-NER8
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
bert-base-dutch-cased-finetuned-NER8 ==================================== This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1482 * Precision: 0.4716 * Recall: 0.4359 * F1: 0.4530 * Accuracy: 0.9569 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu111 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-dutch-cased-finetuned-mBERT This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0898 - Precision: 0.7255 - Recall: 0.7255 - F1: 0.7255 - Accuracy: 0.9758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1603 | 1.0 | 533 | 0.0928 | 0.6896 | 0.6962 | 0.6929 | 0.9742 | | 0.0832 | 2.0 | 1066 | 0.0898 | 0.7255 | 0.7255 | 0.7255 | 0.9758 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-dutch-cased-finetuned-mBERT", "results": []}]}
Matthijsvanhof/bert-base-dutch-cased-finetuned-mBERT
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bert-base-dutch-cased-finetuned-mBERT ===================================== This model is a fine-tuned version of distilbert-base-multilingual-cased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.0898 * Precision: 0.7255 * Recall: 0.7255 * F1: 0.7255 * Accuracy: 0.9758 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu111 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Tokenizers 0.10.3" ]
feature-extraction
transformers
This repository shares a smaller version of bert-base-multilingual-uncased that keeps only Ukrainian, English, and Russian tokens in the vocabulary. | Model | Num parameters | Size | | ----------------------------------------- | -------------- | --------- | | bert-base-multilingual-uncased | 167 million | ~650 MB | | MaxVortman/bert-base-ukr-eng-rus-uncased | 110 million | ~423 MB |
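A minimal usage sketch, assuming the checkpoint loads like any BERT model for feature extraction; the repository name follows the table above, and the example sentences are illustrative.

```python
# Sketch: extract contextual embeddings from the reduced multilingual BERT.
import torch
from transformers import AutoModel, AutoTokenizer

name = "MaxVortman/bert-base-ukr-eng-rus-uncased"  # repository name as listed in the table above
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

batch = tokenizer(["Привіт, світе!", "Hello, world!"], padding=True, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**batch).last_hidden_state  # shape: (batch, seq_len, hidden_size)

print(hidden_states.shape)
```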
{}
mshamrai/bert-base-ukr-eng-rus-uncased
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us
This repository shares a smaller version of bert-base-multilingual-uncased that keeps only Ukrainian, English, and Russian tokens in the vocabulary. Model: bert-base-multilingual-uncased, Num parameters: 167 million, Size: ~650 MB Model: MaxVortman/bert-base-ukr-eng-rus-uncased, Num parameters: 110 million, Size: ~423 MB
[]
[ "TAGS\n#transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Rick and Morty DialoGPT Model
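A minimal single-turn chat sketch, assuming the checkpoint follows the standard DialoGPT conventions (EOS-terminated turns, causal LM generation); the user message and generation settings are illustrative.

```python
# Sketch of a single chat turn with the usual DialoGPT generation pattern.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MaxW0748/DialoGPT-small-Rick")
model = AutoModelForCausalLM.from_pretrained("MaxW0748/DialoGPT-small-Rick")

# Encode the user message and append the end-of-sequence token, as DialoGPT expects.
user_input = "What do you think about science?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")

# Generate a reply and strip the prompt tokens from the decoded output.
reply_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```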
{"tags": ["conversational"]}
MaxW0748/DialoGPT-small-Rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Rick and Morty DialoGPT Model
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text2text-generation
transformers
hello
{}
Maya/essai1
null
[ "transformers", "pytorch", "marian", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #marian #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
hello
[]
[ "TAGS\n#transformers #pytorch #marian #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
MayankGupta/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
automatic-speech-recognition
transformers
# wav2vec2-large-xlsr-53-Czech Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "cs", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Czech test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "cs", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 27.047806 % ## Training The Common Voice `train`, `validation` datasets were used for training.
{"language": "cs", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xlsr-53-Czech by Mehdi Hosseini Moghadam", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice cs", "type": "common_voice", "args": "cs"}, "metrics": [{"type": "wer", "value": 27.047806, "name": "Test WER"}]}]}]}
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Czech
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "cs", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "cs" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #cs #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# wav2vec2-large-xlsr-53-Czech Fine-tuned facebook/wav2vec2-large-xlsr-53 in Czech using the Common Voice When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Czech test data of Common Voice. Test Result: 27.047806 % ## Training The Common Voice 'train', 'validation' datasets were used for training.
[ "# wav2vec2-large-xlsr-53-Czech\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Czech using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Czech test data of Common Voice.\n\n\n\nTest Result: 27.047806 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #cs #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# wav2vec2-large-xlsr-53-Czech\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Czech using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Czech test data of Common Voice.\n\n\n\nTest Result: 27.047806 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training." ]
automatic-speech-recognition
transformers
# wav2vec2-large-xlsr-53-Dutch Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Dutch using the [Common Voice](https://huggingface.co/datasets/common_voice) When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "nl", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Dutch") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Dutch") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Dutch test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "nl", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Dutch") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Dutch") model.to("cuda") chars_to_ignore_regex = '[\\\\,\\\\?\\\\.\\\\!\\\\-\\\\;\\\\:\\\\"\\\\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 26.494162 % ## Training The Common Voice `train`, `validation` datasets were used for training.
{"language": "nl", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xlsr-53-Dutch by Mehdi Hosseini Moghadam", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice nl", "type": "common_voice", "args": "nl"}, "metrics": [{"type": "wer", "value": 26.494162, "name": "Test WER"}]}]}]}
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Dutch
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "nl", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "nl" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #nl #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# wav2vec2-large-xlsr-53-Dutch Fine-tuned facebook/wav2vec2-large-xlsr-53 in Dutch using the Common Voice When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Dutch test data of Common Voice. Test Result: 26.494162 % ## Training The Common Voice 'train', 'validation' datasets were used for training.
[ "# wav2vec2-large-xlsr-53-Dutch\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Dutch using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Dutch test data of Common Voice.\n\n\n\nTest Result: 26.494162 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #nl #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# wav2vec2-large-xlsr-53-Dutch\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Dutch using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Dutch test data of Common Voice.\n\n\n\nTest Result: 26.494162 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training." ]
automatic-speech-recognition
transformers
# wav2vec2-large-xlsr-53-French Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in French using the [Common Voice](https://huggingface.co/datasets/common_voice) When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "fr", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the French test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "fr", split="test[:10%]") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 34.856015 % ## Training 10% of the Common Voice `train`, `validation` datasets were used for training. ## Testing 10% of the Common Voice `Test` dataset were used for training.
{"language": "fr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xlsr-53-French by Mehdi Hosseini Moghadam", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice fr", "type": "common_voice", "args": "fr"}, "metrics": [{"type": "wer", "value": 34.856015, "name": "Test WER"}]}]}]}
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-French
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "fr", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #fr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# wav2vec2-large-xlsr-53-French Fine-tuned facebook/wav2vec2-large-xlsr-53 in French using the Common Voice When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the French test data of Common Voice. Test Result: 34.856015 % ## Training 10% of the Common Voice 'train', 'validation' datasets were used for training. ## Testing 10% of the Common Voice 'Test' dataset was used for testing.
[ "# wav2vec2-large-xlsr-53-French \n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in French using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the French test data of Common Voice.\n\n\n\nTest Result: 34.856015 %", "## Training\n\n10% of the Common Voice 'train', 'validation' datasets were used for training.", "## Testing\n\n10% of the Common Voice 'Test' dataset were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #fr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# wav2vec2-large-xlsr-53-French \n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in French using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the French test data of Common Voice.\n\n\n\nTest Result: 34.856015 %", "## Training\n\n10% of the Common Voice 'train', 'validation' datasets were used for training.", "## Testing\n\n10% of the Common Voice 'Test' dataset were used for training." ]
automatic-speech-recognition
transformers
# wav2vec2-large-xlsr-53-Georgian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Georgian using the [Common Voice](https://huggingface.co/datasets/common_voice) When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ka", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Georgian test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ka", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian") model.to("cuda") chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 60.504024 % ## Training The Common Voice `train`, `validation` datasets were used for training.
{"language": "ka", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xlsr-53-Georgian by Mehdi Hosseini Moghadam", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ka", "type": "common_voice", "args": "ka"}, "metrics": [{"type": "wer", "value": 60.504024, "name": "Test WER"}]}]}]}
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ka", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ka" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ka #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# wav2vec2-large-xlsr-53-Georgian Fine-tuned facebook/wav2vec2-large-xlsr-53 in Georgian using the Common Voice When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Georgian test data of Common Voice. Test Result: 60.504024 % ## Training The Common Voice 'train', 'validation' datasets were used for training.
[ "# wav2vec2-large-xlsr-53-Georgian \n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Georgian using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Georgian test data of Common Voice.\n\n\n\nTest Result: 60.504024 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ka #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# wav2vec2-large-xlsr-53-Georgian \n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Georgian using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Georgian test data of Common Voice.\n\n\n\nTest Result: 60.504024 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training." ]
automatic-speech-recognition
transformers
# wav2vec2-large-xlsr-53-German Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in German using the [Common Voice](https://huggingface.co/datasets/common_voice) When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "de", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-German") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-German") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Czech test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "de", split="test[:15%]") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-German") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-German") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 25.284593 % ## Training 10% of the Common Voice `train`, `validation` datasets were used for training. ## Testing 15% of the Common Voice `Test` dataset were used for training.
{"language": "de", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xlsr-53-German by Mehdi Hosseini Moghadam", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice de", "type": "common_voice", "args": "de"}, "metrics": [{"type": "wer", "value": 25.284593, "name": "Test WER"}]}]}]}
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-German
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "de", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "de" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #de #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# wav2vec2-large-xlsr-53-German Fine-tuned facebook/wav2vec2-large-xlsr-53 in German using the Common Voice When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the German test data of Common Voice. Test Result: 25.284593 % ## Training 10% of the Common Voice 'train', 'validation' datasets were used for training. ## Testing 15% of the Common Voice 'Test' dataset was used for testing.
[ "# wav2vec2-large-xlsr-53-German\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in German using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Czech test data of Common Voice.\n\n\n\nTest Result: 25.284593 %", "## Training\n\n10% of the Common Voice 'train', 'validation' datasets were used for training.", "## Testing\n\n15% of the Common Voice 'Test' dataset were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #de #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# wav2vec2-large-xlsr-53-German\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in German using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Czech test data of Common Voice.\n\n\n\nTest Result: 25.284593 %", "## Training\n\n10% of the Common Voice 'train', 'validation' datasets were used for training.", "## Testing\n\n15% of the Common Voice 'Test' dataset were used for training." ]
automatic-speech-recognition
transformers
# wav2vec2-large-xlsr-53-Swedish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Swedish using the [Common Voice](https://huggingface.co/datasets/common_voice) When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Swedish") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Swedish") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Swedish test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "sv-SE", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Swedish") model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Swedish") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 41.388337 % ## Training The Common Voice `train`, `validation` datasets were used for training.
{"language": "sv-SE", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xlsr-53-Swedish by Mehdi Hosseini Moghadam", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice sv-SE", "type": "common_voice", "args": "sv-SE"}, "metrics": [{"type": "wer", "value": 41.388337, "name": "Test WER"}]}]}]}
MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Swedish
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "sv-SE" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# wav2vec2-large-xlsr-53-Swedish Fine-tuned facebook/wav2vec2-large-xlsr-53 in Swedish using the Common Voice When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ## Evaluation The model can be evaluated as follows on the Swedish test data of Common Voice. Test Result: 41.388337 % ## Training The Common Voice 'train', 'validation' datasets were used for training.
[ "# wav2vec2-large-xlsr-53-Swedish\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Swedish using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Swedish test data of Common Voice.\n\n\n\nTest Result: 41.388337 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training." ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# wav2vec2-large-xlsr-53-Swedish\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Swedish using the Common Voice\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\n\nThe model can be used directly (without a language model) as follows:", "## Evaluation\n\nThe model can be evaluated as follows on the Swedish test data of Common Voice.\n\n\n\nTest Result: 41.388337 %", "## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training." ]
text-generation
transformers
# GPT-2 Story Generator

## Model description

Generate a short story from an input prompt.

Put the token ` [endprompt]` after your input.

Example of an input:

```
A person with a high school education gets sent back into the 1600s and tries to explain science and technology to the people. [endprompt]
```

#### Limitations and bias

The data we used for training was collected from Reddit, so it could be strongly biased towards a young, white, male demographic.

## Training data

The data was collected by scraping Reddit.
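A minimal usage sketch, not part of the original card: it assumes the standard `transformers` text-generation pipeline, uses the repo id and one of the widget prompts listed in this record's metadata, and the sampling settings are illustrative defaults rather than the author's.

```python
from transformers import pipeline

# Illustrative only: generation settings are arbitrary defaults, not the author's.
generator = pipeline("text-generation", model="Meli/GPT2-Prompt")

prompt = ("A kid doodling in a math class accidentally creates the world's "
          "first functional magic circle in centuries. [endprompt]")
story = generator(prompt, max_length=200, do_sample=True, top_p=0.95)[0]["generated_text"]
print(story)
```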
{"language": ["en"], "tags": ["gpt2", "text-generation"], "pipeline_tag": "text-generation", "widget": [{"text": "A person with a high school education gets sent back into the 1600s and tries to explain science and technology to the people. [endprompt]"}, {"text": "A kid doodling in a math class accidentally creates the world's first functional magic circle in centuries. [endprompt]"}]}
Meli/GPT2-Prompt
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "en", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #jax #gpt2 #text-generation #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# GPT-2 Story Generator ## Model description Generate a short story from an input prompt. Put the vocab ' [endprompt]' after your input. Example of an input: #### Limitations and bias The data we used to train was collected from reddit, so it could be very biased towards young, white, male demographic. ## Training data The data was collected from scraping reddit.
[ "# GPT-2 Story Generator", "## Model description\n\nGenerate a short story from an input prompt.\n\nPut the vocab ' [endprompt]' after your input.\n\nExample of an input:", "#### Limitations and bias\n\nThe data we used to train was collected from reddit, so it could be very biased towards young, white, male demographic.", "## Training data\n\nThe data was collected from scraping reddit." ]
[ "TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# GPT-2 Story Generator", "## Model description\n\nGenerate a short story from an input prompt.\n\nPut the vocab ' [endprompt]' after your input.\n\nExample of an input:", "#### Limitations and bias\n\nThe data we used to train was collected from reddit, so it could be very biased towards young, white, male demographic.", "## Training data\n\nThe data was collected from scraping reddit." ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6324 - Matthews Correlation: 0.5207 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5245 | 1.0 | 535 | 0.5155 | 0.4181 | | 0.3446 | 2.0 | 1070 | 0.5623 | 0.4777 | | 0.2331 | 3.0 | 1605 | 0.6324 | 0.5207 | | 0.1678 | 4.0 | 2140 | 0.7706 | 0.5106 | | 0.1255 | 5.0 | 2675 | 0.8852 | 0.4998 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
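The hyperparameter list above maps almost one-to-one onto `TrainingArguments`; a hedged reconstruction of the fine-tuning run is sketched below. The `output_dir`, tokenization details, and per-epoch evaluation cadence are assumptions, while the learning rate, batch sizes, seed, and epoch count mirror the card.

```python
import numpy as np
from datasets import load_dataset, load_metric
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Rough, illustrative reconstruction of the run described in this card.
raw = load_dataset("glue", "cola")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True)

encoded = raw.map(tokenize, batched=True)
metric = load_metric("matthews_correlation")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return metric.compute(predictions=preds, references=labels)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    seed=42,
    evaluation_strategy="epoch",  # assumed, matches the per-epoch validation table
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
    compute_metrics=compute_metrics,
)
trainer.train()
```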
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5206791471093309, "name": "Matthews Correlation"}]}]}]}
MelissaTESSA/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-cola ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.6324 * Matthews Correlation: 0.5207 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.0+cu111 * Datasets 1.18.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3" ]
null
null
Gggg
{}
Mervtttt/Ges
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
Gggg
[]
[ "TAGS\n#region-us \n" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.2663 - Accuracy: 0.9461 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 9 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.1991 | 1.0 | 318 | 3.1495 | 0.7523 | | 2.4112 | 2.0 | 636 | 1.5868 | 0.8510 | | 1.1887 | 3.0 | 954 | 0.7975 | 0.9203 | | 0.5952 | 4.0 | 1272 | 0.4870 | 0.9319 | | 0.3275 | 5.0 | 1590 | 0.3571 | 0.9419 | | 0.2066 | 6.0 | 1908 | 0.3070 | 0.9429 | | 0.1456 | 7.0 | 2226 | 0.2809 | 0.9448 | | 0.1154 | 8.0 | 2544 | 0.2697 | 0.9468 | | 0.1011 | 9.0 | 2862 | 0.2663 | 0.9461 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
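The card does not say how the "distilled" variant was produced. Purely as a point of reference, a generic response-based knowledge-distillation loss (not necessarily the author's setup) looks like this; the temperature and weighting are illustrative values.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft teacher targets with the ordinary hard-label loss (illustrative)."""
    # KL divergence between temperature-softened teacher and student distributions
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Standard cross-entropy against the gold intent labels
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```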
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["clinc_oos"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-distilled-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9461290322580646, "name": "Accuracy"}]}]}]}
MhF/distilbert-base-uncased-distilled-clinc
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-clinc_oos #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-distilled-clinc ======================================= This model is a fine-tuned version of distilbert-base-uncased on the clinc\_oos dataset. It achieves the following results on the evaluation set: * Loss: 0.2663 * Accuracy: 0.9461 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 48 * eval\_batch\_size: 48 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 9 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 9", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-clinc_oos #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 9", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7703 - Accuracy: 0.9187 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2896 | 1.0 | 318 | 3.2887 | 0.7419 | | 2.6309 | 2.0 | 636 | 1.8797 | 0.8310 | | 1.5443 | 3.0 | 954 | 1.1537 | 0.8974 | | 1.0097 | 4.0 | 1272 | 0.8560 | 0.9135 | | 0.7918 | 5.0 | 1590 | 0.7703 | 0.9187 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
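For completeness, a quick inference sketch (not from the card) using this record's repo id; the example utterance is made up, and the returned label is whatever intent mapping was saved with the checkpoint.

```python
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="MhF/distilbert-base-uncased-finetuned-clinc",
)
print(intent_classifier("Please transfer 100 dollars from my checking to my savings account."))
```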
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["clinc_oos"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos", "args": "plus"}, "metrics": [{"type": "accuracy", "value": 0.9187096774193548, "name": "Accuracy"}]}]}]}
MhF/distilbert-base-uncased-finetuned-clinc
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-clinc_oos #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
distilbert-base-uncased-finetuned-clinc ======================================= This model is a fine-tuned version of distilbert-base-uncased on the clinc\_oos dataset. It achieves the following results on the evaluation set: * Loss: 0.7703 * Accuracy: 0.9187 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 48 * eval\_batch\_size: 48 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-clinc_oos #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2232 - Accuracy: 0.9215 - F1: 0.9218 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8098 | 1.0 | 250 | 0.3138 | 0.9025 | 0.9001 | | 0.2429 | 2.0 | 500 | 0.2232 | 0.9215 | 0.9218 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
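The card reports both accuracy and F1 but not how they were computed; a typical `compute_metrics` hook for a `Trainer` run like this one is sketched below. The choice of a weighted F1 average is an assumption.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Converts raw logits from the Trainer's evaluation loop into the two reported metrics.
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),  # "weighted" is assumed
    }
```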
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9215, "name": "Accuracy"}, {"type": "f1", "value": 0.9217985126397109, "name": "F1"}]}]}]}
MhF/distilbert-base-uncased-finetuned-emotion
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-emotion ========================================= This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set: * Loss: 0.2232 * Accuracy: 0.9215 * F1: 0.9218 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1753 - F1: 0.8520 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2989 | 1.0 | 835 | 0.1878 | 0.8123 | | 0.1548 | 2.0 | 1670 | 0.1745 | 0.8480 | | 0.1012 | 3.0 | 2505 | 0.1753 | 0.8520 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
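The card does not list which subsets went into the "all" model; assuming it combines the same four PAN-X languages covered by the sibling models in this series (de, fr, it, en), the training corpus could be assembled roughly like this with the `datasets` library.

```python
from datasets import DatasetDict, concatenate_datasets, load_dataset

langs = ["de", "fr", "it", "en"]  # assumed, based on the sibling PAN-X models
raw = {lang: load_dataset("xtreme", name=f"PAN-X.{lang}") for lang in langs}

# Merge the per-language splits into one multilingual DatasetDict and shuffle.
panx_all = DatasetDict()
for split in ["train", "validation", "test"]:
    panx_all[split] = concatenate_datasets(
        [raw[lang][split] for lang in langs]
    ).shuffle(seed=42)

print(panx_all)
```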
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "model-index": [{"name": "xlm-roberta-base-finetuned-panx-all", "results": []}]}
MhF/xlm-roberta-base-finetuned-panx-all
null
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-all =================================== This model is a fine-tuned version of xlm-roberta-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1753 * F1: 0.8520 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu113 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1576 - F1: 0.8571 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2924 | 1.0 | 715 | 0.1819 | 0.8286 | | 0.1503 | 2.0 | 1430 | 0.1580 | 0.8511 | | 0.0972 | 3.0 | 2145 | 0.1576 | 0.8571 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
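These PAN-X cards report an F1 without saying how it is computed; for this kind of NER task it is typically an entity-level score from `seqeval`, which counts an entity as correct only if both its boundaries and its type match. A small illustration (the tag sequences are made up):

```python
from seqeval.metrics import f1_score

y_true = [["B-PER", "I-PER", "O", "B-ORG"]]
y_pred = [["B-PER", "I-PER", "O", "B-LOC"]]

# One of the two gold entities is matched exactly -> precision = recall = 0.5
print(f1_score(y_true, y_pred))  # 0.5
```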
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de-fr", "results": []}]}
MhF/xlm-roberta-base-finetuned-panx-de-fr
null
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-de-fr ===================================== This model is a fine-tuned version of xlm-roberta-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1576 * F1: 0.8571 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu113 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1354 - F1: 0.8621 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.254 | 1.0 | 525 | 0.1652 | 0.8254 | | 0.1293 | 2.0 | 1050 | 0.1431 | 0.8489 | | 0.0797 | 3.0 | 1575 | 0.1354 | 0.8621 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
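A hedged inference sketch, not part of the original card, using this record's repo id: the German sentence is invented, `aggregation_strategy="simple"` merges word pieces into whole entities, and the output depends on the label mapping saved with the checkpoint.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="MhF/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Sandra Müller arbeitet seit 2010 bei Siemens in München."))
```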
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["xtreme"], "metrics": ["f1"], "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "xtreme", "type": "xtreme", "args": "PAN-X.de"}, "metrics": [{"type": "f1", "value": 0.862053266560437, "name": "F1"}]}]}]}
MhF/xlm-roberta-base-finetuned-panx-de
null
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #xlm-roberta #token-classification #generated_from_trainer #dataset-xtreme #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-de ================================== This model is a fine-tuned version of xlm-roberta-base on the xtreme dataset. It achieves the following results on the evaluation set: * Loss: 0.1354 * F1: 0.8621 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu113 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #token-classification #generated_from_trainer #dataset-xtreme #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3856 - F1: 0.6808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1038 | 1.0 | 50 | 0.5255 | 0.5331 | | 0.4922 | 2.0 | 100 | 0.4377 | 0.6379 | | 0.3664 | 3.0 | 150 | 0.3856 | 0.6808 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["xtreme"], "metrics": ["f1"], "model-index": [{"name": "xlm-roberta-base-finetuned-panx-en", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "xtreme", "type": "xtreme", "args": "PAN-X.en"}, "metrics": [{"type": "f1", "value": 0.6807563959955506, "name": "F1"}]}]}]}
MhF/xlm-roberta-base-finetuned-panx-en
null
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #dataset-xtreme #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-en ================================== This model is a fine-tuned version of xlm-roberta-base on the xtreme dataset. It achieves the following results on the evaluation set: * Loss: 0.3856 * F1: 0.6808 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu113 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #dataset-xtreme #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2736 - F1: 0.8353 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5826 | 1.0 | 191 | 0.3442 | 0.7888 | | 0.2669 | 2.0 | 382 | 0.2848 | 0.8326 | | 0.1818 | 3.0 | 573 | 0.2736 | 0.8353 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["xtreme"], "metrics": ["f1"], "model-index": [{"name": "xlm-roberta-base-finetuned-panx-fr", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "xtreme", "type": "xtreme", "args": "PAN-X.fr"}, "metrics": [{"type": "f1", "value": 0.8353494623655915, "name": "F1"}]}]}]}
MhF/xlm-roberta-base-finetuned-panx-fr
null
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #dataset-xtreme #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-fr ================================== This model is a fine-tuned version of xlm-roberta-base on the xtreme dataset. It achieves the following results on the evaluation set: * Loss: 0.2736 * F1: 0.8353 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu113 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #dataset-xtreme #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2491 - F1: 0.8213 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8192 | 1.0 | 70 | 0.3300 | 0.7184 | | 0.2949 | 2.0 | 140 | 0.2817 | 0.7959 | | 0.189 | 3.0 | 210 | 0.2491 | 0.8213 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["xtreme"], "metrics": ["f1"], "model-index": [{"name": "xlm-roberta-base-finetuned-panx-it", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "xtreme", "type": "xtreme", "args": "PAN-X.it"}, "metrics": [{"type": "f1", "value": 0.8213114754098361, "name": "F1"}]}]}]}
MhF/xlm-roberta-base-finetuned-panx-it
null
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #dataset-xtreme #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-it ================================== This model is a fine-tuned version of xlm-roberta-base on the xtreme dataset. It achieves the following results on the evaluation set: * Loss: 0.2491 * F1: 0.8213 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu113 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #token-classification #generated_from_trainer #dataset-xtreme #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu113\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
text-generation
transformers
# feinschwarz

This model is a fine-tuned version of [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2). The dataset was compiled from all texts of https://www.feinschwarz.net (as of October 2021). The homepage gathers essayistic texts on theological topics.

The model will be used to explore the challenges of text-generating AI for theology with a hands-on approach. Can an AI generate theological knowledge? Is a text by Karl Rahner of more value than an AI-generated text? Can we even distinguish a Rahner text from an AI-generated text in the future? And the crucial question: Would it be bad if not?

The model is a very first attempt and, in its current version, certainly not yet a danger to academic theology 🤓

# Using the model

You can create text with the model using this code:

```python
from transformers import pipeline

pipe = pipeline('text-generation', model="Michael711/feinschwarz",
                tokenizer="Michael711/feinschwarz")

text = pipe("Der Sinn des Lebens ist es", max_length=100)[0]["generated_text"]
print(text)
```

Have fun theologizing!
{"license": "mit", "tags": ["generated_from_trainer", "de"], "model-index": [{"name": "feinesblack", "results": []}]}
Michael711/feinschwarz
null
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #generated_from_trainer #de #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# feinschwarz This model is a fine-tuned version of dbmdz/german-gpt2. The dataset was compiled from all texts of URL (as of October 2021). The homepage gathers essayistic texts on theological topics. The model will be used to explore the challenges of text-generating AI for theology with a hands on approach. Can an AI generate theological knowledge? Is a text by Karl Rahner of more value than an AI-generated text? Can we even distinguish a Rahner text from an AI-generated text in the future? And the crucial question: Would it be bad if not? The model is a very first attempt and in its current version certainly not yet a danger for academic theology # Using the model You can create text with the model using this code: Have fun theologizing!
[ "# feinschwarz\n\nThis model is a fine-tuned version of dbmdz/german-gpt2. The dataset was compiled from all texts of URL (as of October 2021). The homepage gathers essayistic texts on theological topics.\n\nThe model will be used to explore the challenges of text-generating AI for theology with a hands on approach. Can an AI generate theological knowledge? Is a text by Karl Rahner of more value than an AI-generated text? Can we even distinguish a Rahner text from an AI-generated text in the future? And the crucial question: Would it be bad if not?\n\nThe model is a very first attempt and in its current version certainly not yet a danger for academic theology", "# Using the model\n\nYou can create text with the model using this code:\n\n\n\nHave fun theologizing!" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #de #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# feinschwarz\n\nThis model is a fine-tuned version of dbmdz/german-gpt2. The dataset was compiled from all texts of URL (as of October 2021). The homepage gathers essayistic texts on theological topics.\n\nThe model will be used to explore the challenges of text-generating AI for theology with a hands on approach. Can an AI generate theological knowledge? Is a text by Karl Rahner of more value than an AI-generated text? Can we even distinguish a Rahner text from an AI-generated text in the future? And the crucial question: Would it be bad if not?\n\nThe model is a very first attempt and in its current version certainly not yet a danger for academic theology", "# Using the model\n\nYou can create text with the model using this code:\n\n\n\nHave fun theologizing!" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
MichaelTheLearner/DialoGPT-medium-harry
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
text2text-generation
transformers
## About the model The model has been trained on a collection of 500k articles with headings. Its purpose is to create a one-line heading suitable for the given article. Sample code with a WikiNews article: ```python import torch from transformers import T5ForConditionalGeneration,T5Tokenizer device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = T5ForConditionalGeneration.from_pretrained("Michau/t5-base-en-generate-headline") tokenizer = T5Tokenizer.from_pretrained("Michau/t5-base-en-generate-headline") model = model.to(device) article = ''' Very early yesterday morning, the United States President Donald Trump reported he and his wife First Lady Melania Trump tested positive for COVID-19. Officials said the Trumps' 14-year-old son Barron tested negative as did First Family and Senior Advisors Jared Kushner and Ivanka Trump. Trump took to social media, posting at 12:54 am local time (0454 UTC) on Twitter, "Tonight, [Melania] and I tested positive for COVID-19. We will begin our quarantine and recovery process immediately. We will get through this TOGETHER!" Yesterday afternoon Marine One landed on the White House's South Lawn flying Trump to Walter Reed National Military Medical Center (WRNMMC) in Bethesda, Maryland. Reports said both were showing "mild symptoms". Senior administration officials were tested as people were informed of the positive test. Senior advisor Hope Hicks had tested positive on Thursday. Presidential physician Sean Conley issued a statement saying Trump has been given zinc, vitamin D, Pepcid and a daily Aspirin. Conley also gave a single dose of the experimental polyclonal antibodies drug from Regeneron Pharmaceuticals. According to official statements, Trump, now operating from the WRNMMC, is to continue performing his duties as president during a 14-day quarantine. In the event of Trump becoming incapacitated, Vice President Mike Pence could take over the duties of president via the 25th Amendment of the US Constitution. The Pence family all tested negative as of yesterday and there were no changes regarding Pence's campaign events. ''' text = "headline: " + article max_len = 256 encoding = tokenizer.encode_plus(text, return_tensors = "pt") input_ids = encoding["input_ids"].to(device) attention_masks = encoding["attention_mask"].to(device) beam_outputs = model.generate( input_ids = input_ids, attention_mask = attention_masks, max_length = 64, num_beams = 3, early_stopping = True, ) result = tokenizer.decode(beam_outputs[0]) print(result) ``` Result: ```Trump and First Lady Melania Test Positive for COVID-19```
{}
Michau/t5-base-en-generate-headline
null
[ "transformers", "pytorch", "tf", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
## About the model The model has been trained on a collection of 500k articles with headings. Its purpose is to create a one-line heading suitable for the given article. Sample code with a WikiNews article: Result:
[ "## About the model\n\nThe model has been trained on a collection of 500k articles with headings. Its purpose is to create a one-line heading suitable for the given article.\n\nSample code with a WikiNews article:\n\n\n\nResult:" ]
[ "TAGS\n#transformers #pytorch #tf #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "## About the model\n\nThe model has been trained on a collection of 500k articles with headings. Its purpose is to create a one-line heading suitable for the given article.\n\nSample code with a WikiNews article:\n\n\n\nResult:" ]
text-generation
transformers
#harry
{"tags": ["conversational"]}
Mierln/SmartHarry
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
#harry
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# Edward Elric DialoGPT Model
{"tags": ["conversational"]}
MightyCoderX/DialoGPT-medium-EdwardElric
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Edward Elric DialoGPT Model
[ "# Edward Elric DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Edward Elric DialoGPT Model" ]
fill-mask
transformers
kcbert-mlm-finetune
{}
stresscaptor/kcbert-mlm-finetune
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
kcbert-mlm-finetune
[]
[ "TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-classification
transformers
# FEEL-IT: Emotion and Sentiment Classification for the Italian Language ## FEEL-IT Python Package You can find the package that uses this model for emotion and sentiment classification **[here](https://github.com/MilaNLProc/feel-it)** it is meant to be a very simple interface over HuggingFace models. ## License Users should refer to the [following license](https://developer.twitter.com/en/developer-terms/commercial-terms) ## Abstract Sentiment analysis is a common task to understand people's reactions online. Still, we often need more nuanced information: is the post negative because the user is angry or because they are sad? An abundance of approaches has been introduced for tackling both tasks. However, at least for Italian, they all treat only one of the tasks at a time. We introduce *FEEL-IT*, a novel benchmark corpus of Italian Twitter posts annotated with four basic emotions: **anger, fear, joy, sadness**. By collapsing them, we can also do **sentiment analysis**. We evaluate our corpus on benchmark datasets for both emotion and sentiment classification, obtaining competitive results. We release an [open-source Python library](https://github.com/MilaNLProc/feel-it), so researchers can use a model trained on FEEL-IT for inferring both sentiments and emotions from Italian text. | Model | Download | | ------ | -------------------------| | `feel-it-italian-sentiment` | [Link](https://huggingface.co/MilaNLProc/feel-it-italian-sentiment) | | `feel-it-italian-emotion` | [Link](https://huggingface.co/MilaNLProc/feel-it-italian-emotion) | ## Model The *feel-it-italian-emotion* model performs **emotion classification (joy, fear, anger, sadness)** on Italian. We fine-tuned the [UmBERTo model](https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1) on our new dataset (i.e., FEEL-IT) obtaining state-of-the-art performances on different benchmark corpora. ## Data Our data has been collected by annotating tweets from a broad range of topics. In total, we have 2037 tweets annotated with an emotion label. More details can be found in our paper (https://aclanthology.org/2021.wassa-1.8/). ## Performance We evaluate our performance using [MultiEmotions-It](http://ceur-ws.org/Vol-2769/paper_08.pdf). This dataset differs from FEEL-IT both in terms of topic variety and considered social media (i.e., YouTube and Facebook). We considered only the subset of emotions present in FEEL-IT. To give a point of reference, we also show the Most Frequent Class (MFC) baseline results. The results show that training on FEEL-IT brings stable performance even on datasets from different contexts. | Training Dataset | Macro-F1 | Accuracy | ------ | ------ |------ | | MFC | 0.20 | 0.64 | | FEEL-IT | **0.57** | **0.73** | ## Usage ```python from transformers import pipeline classifier = pipeline("text-classification",model='MilaNLProc/feel-it-italian-emotion',top_k=2) prediction = classifier("Oggi sono proprio contento!") print(prediction) ``` ## Citation Please use the following bibtex entry if you use this model in your project: ``` @inproceedings{bianchi2021feel, title = {{"FEEL-IT: Emotion and Sentiment Classification for the Italian Language"}}, author = "Bianchi, Federico and Nozza, Debora and Hovy, Dirk", booktitle = "Proceedings of the 11th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", year = "2021", publisher = "Association for Computational Linguistics", } ```
{"language": "it", "tags": ["sentiment", "emotion", "Italian"]}
MilaNLProc/feel-it-italian-emotion
null
[ "transformers", "pytorch", "tf", "camembert", "text-classification", "sentiment", "emotion", "Italian", "it", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "it" ]
TAGS #transformers #pytorch #tf #camembert #text-classification #sentiment #emotion #Italian #it #autotrain_compatible #endpoints_compatible #has_space #region-us
FEEL-IT: Emotion and Sentiment Classification for the Italian Language ====================================================================== FEEL-IT Python Package ---------------------- You can find the package that uses this model for emotion and sentiment classification here it is meant to be a very simple interface over HuggingFace models. License ------- Users should refer to the following license Abstract -------- Sentiment analysis is a common task to understand people's reactions online. Still, we often need more nuanced information: is the post negative because the user is angry or because they are sad? An abundance of approaches has been introduced for tackling both tasks. However, at least for Italian, they all treat only one of the tasks at a time. We introduce *FEEL-IT*, a novel benchmark corpus of Italian Twitter posts annotated with four basic emotions: anger, fear, joy, sadness. By collapsing them, we can also do sentiment analysis. We evaluate our corpus on benchmark datasets for both emotion and sentiment classification, obtaining competitive results. We release an open-source Python library, so researchers can use a model trained on FEEL-IT for inferring both sentiments and emotions from Italian text. Model ----- The *feel-it-italian-emotion* model performs emotion classification (joy, fear, anger, sadness) on Italian. We fine-tuned the UmBERTo model on our new dataset (i.e., FEEL-IT) obtaining state-of-the-art performances on different benchmark corpora. Data ---- Our data has been collected by annotating tweets from a broad range of topics. In total, we have 2037 tweets annotated with an emotion label. More details can be found in our paper (URL Performance ----------- We evaluate our performance using MultiEmotions-It. This dataset differs from FEEL-IT both in terms of topic variety and considered social media (i.e., YouTube and Facebook). We considered only the subset of emotions present in FEEL-IT. To give a point of reference, we also show the Most Frequent Class (MFC) baseline results. The results show that training on FEEL-IT brings stable performance even on datasets from different contexts. Training Dataset: MFC, Macro-F1: 0.20, Accuracy: 0.64 Training Dataset: FEEL-IT, Macro-F1: 0.57, Accuracy: 0.73 Usage ----- Please use the following bibtex entry if you use this model in your project:
[]
[ "TAGS\n#transformers #pytorch #tf #camembert #text-classification #sentiment #emotion #Italian #it #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-classification
transformers
# FEEL-IT: Emotion and Sentiment Classification for the Italian Language ## FEEL-IT Python Package You can find the package that uses this model for emotion and sentiment classification **[here](https://github.com/MilaNLProc/feel-it)** it is meant to be a very simple interface over HuggingFace models. ## License Users should refer to the [following license](https://developer.twitter.com/en/developer-terms/commercial-terms) ## Abstract Sentiment analysis is a common task to understand people's reactions online. Still, we often need more nuanced information: is the post negative because the user is angry or because they are sad? An abundance of approaches has been introduced for tackling both tasks. However, at least for Italian, they all treat only one of the tasks at a time. We introduce *FEEL-IT*, a novel benchmark corpus of Italian Twitter posts annotated with four basic emotions: **anger, fear, joy, sadness**. By collapsing them, we can also do **sentiment analysis**. We evaluate our corpus on benchmark datasets for both emotion and sentiment classification, obtaining competitive results. We release an [open-source Python library](https://github.com/MilaNLProc/feel-it), so researchers can use a model trained on FEEL-IT for inferring both sentiments and emotions from Italian text. | Model | Download | | ------ | -------------------------| | `feel-it-italian-sentiment` | [Link](https://huggingface.co/MilaNLProc/feel-it-italian-sentiment) | | `feel-it-italian-emotion` | [Link](https://huggingface.co/MilaNLProc/feel-it-italian-emotion) | ## Model The *feel-it-italian-sentiment* model performs **sentiment analysis** on Italian. We fine-tuned the [UmBERTo model](https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1) on our new dataset (i.e., FEEL-IT) obtaining state-of-the-art performances on different benchmark corpora. ## Data Our data has been collected by annotating tweets from a broad range of topics. In total, we have 2037 tweets annotated with an emotion label. More details can be found in our paper (https://aclanthology.org/2021.wassa-1.8/). ## Performance We evaluate our performance using [SENTIPOLC16 Evalita](http://www.di.unito.it/~tutreeb/sentipolc-evalita16/). We collapsed the FEEL-IT classes into 2 by mapping joy to the *positive* class and anger, fear and sadness into the *negative* class. We compare three different experimental configurations training on FEEL-IT, SENTIPOLC16, or both by testing on the SENTIPOLC16 test set. The results show that training on FEEL-IT can provide better results on the SENTIPOLC16 test set than those that can be obtained with the SENTIPOLC16 training set. 
| Training Dataset | Macro-F1 | Accuracy | ------ | ------ |------ | | SENTIPOLC16 | 0.80 | 0.81 | | FEEL-IT | **0.81** | **0.84** | | FEEL-IT+SentiPolc | 0.81 | 0.82 ## Usage ```python from transformers import pipeline classifier = pipeline("text-classification",model='MilaNLProc/feel-it-italian-sentiment',top_k=2) prediction = classifier("Oggi sono proprio contento!") print(prediction) ``` ## Citation Please use the following bibtex entry if you use this model in your project: ``` @inproceedings{bianchi2021feel, title = {{"FEEL-IT: Emotion and Sentiment Classification for the Italian Language"}}, author = "Bianchi, Federico and Nozza, Debora and Hovy, Dirk", booktitle = "Proceedings of the 11th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", year = "2021", publisher = "Association for Computational Linguistics", } ```
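As an illustrative aside (not part of the original card), the same pipeline also accepts a list of posts, so batch scoring is a single call; the example sentences below are made up for demonstration:

```python
from transformers import pipeline

# Sentiment pipeline over the FEEL-IT sentiment checkpoint.
classifier = pipeline(
    "text-classification",
    model="MilaNLProc/feel-it-italian-sentiment",
)

# The pipeline accepts a list, so several posts can be scored in one call.
posts = [
    "Oggi sono proprio contento!",            # expected: positive
    "Che giornata terribile, sono esausto.",  # expected: negative
]
for post, result in zip(posts, classifier(posts)):
    print(post, "->", result["label"], round(result["score"], 3))
```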
{"language": "it", "tags": ["sentiment", "Italian"]}
MilaNLProc/feel-it-italian-sentiment
null
[ "transformers", "pytorch", "tf", "camembert", "text-classification", "sentiment", "Italian", "it", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "it" ]
TAGS #transformers #pytorch #tf #camembert #text-classification #sentiment #Italian #it #autotrain_compatible #endpoints_compatible #has_space #region-us
FEEL-IT: Emotion and Sentiment Classification for the Italian Language ====================================================================== FEEL-IT Python Package ---------------------- You can find the package that uses this model for emotion and sentiment classification here it is meant to be a very simple interface over HuggingFace models. License ------- Users should refer to the following license Abstract -------- Sentiment analysis is a common task to understand people's reactions online. Still, we often need more nuanced information: is the post negative because the user is angry or because they are sad? An abundance of approaches has been introduced for tackling both tasks. However, at least for Italian, they all treat only one of the tasks at a time. We introduce *FEEL-IT*, a novel benchmark corpus of Italian Twitter posts annotated with four basic emotions: anger, fear, joy, sadness. By collapsing them, we can also do sentiment analysis. We evaluate our corpus on benchmark datasets for both emotion and sentiment classification, obtaining competitive results. We release an open-source Python library, so researchers can use a model trained on FEEL-IT for inferring both sentiments and emotions from Italian text. Model ----- The *feel-it-italian-sentiment* model performs sentiment analysis on Italian. We fine-tuned the UmBERTo model on our new dataset (i.e., FEEL-IT) obtaining state-of-the-art performances on different benchmark corpora. Data ---- Our data has been collected by annotating tweets from a broad range of topics. In total, we have 2037 tweets annotated with an emotion label. More details can be found in our paper (URL Performance ----------- We evaluate our performance using SENTIPOLC16 Evalita. We collapsed the FEEL-IT classes into 2 by mapping joy to the *positive* class and anger, fear and sadness into the *negative* class. We compare three different experimental configurations training on FEEL-IT, SENTIPOLC16, or both by testing on the SENTIPOLC16 test set. The results show that training on FEEL-IT can provide better results on the SENTIPOLC16 test set than those that can be obtained with the SENTIPOLC16 training set. Training Dataset: SENTIPOLC16, Macro-F1: 0.80, Accuracy: 0.81 Training Dataset: FEEL-IT, Macro-F1: 0.81, Accuracy: 0.84 Training Dataset: FEEL-IT+SentiPolc, Macro-F1: 0.81, Accuracy: 0.82 Usage ----- Please use the following bibtex entry if you use this model in your project:
[]
[ "TAGS\n#transformers #pytorch #tf #camembert #text-classification #sentiment #Italian #it #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-generation
transformers
# Slovak GPT-J-1.4B Slovak GPT-J-1.4B with the whopping `1,415,283,792` parameters is the latest and the largest model released in Slovak GPT-J series. Smaller variants, [Slovak GPT-J-405M](https://huggingface.co/Milos/slovak-gpt-j-405M) and [Slovak GPT-J-162M](https://huggingface.co/Milos/slovak-gpt-j-162M), are still available. ## Model Description Model is based on [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax/) and has over 1.4B trainable parameters. <figure> | Hyperparameter | Value | |----------------------|----------------------------------------------------------------------------------------------------------------------------------------| | \\(n_{parameters}\\) | 1,415,283,792 | | \\(n_{layers}\\) | 24 | | \\(d_{model}\\) | 2048 | | \\(d_{ff}\\) | 16384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2048 | | \\(n_{vocab}\\) | 50256 (same tokenizer as GPT-2/3&dagger;) | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | <p><strong>&dagger;</strong> ByteLevelBPETokenizer was trained on the same Slovak corpus.</p></figure> ## Training data Slovak GPT-J models were trained on a privately collected dataset consisting of predominantly Slovak text spanning different categories, e.g. web, news articles or even biblical texts - in total, over 40GB of text data was used to train this model. The dataset was preprocessed and cleaned in a specific way that involves minor but a few caveats, so in order to achieve the expected performance, feel free to refer to [How to use] section. Please, keep in mind that despite the effort to remove inappropriate corpus, the model still might generate sensitive content or leak sensitive information. ## Training procedure This model was trained for a bit more than 26.5 billion tokens over 48,001 steps on TPU v3-8 pod. The cross-entropy validation loss at the last step was `2.657`. ## Intended Use Same as the original GPT-J, Slovak GPT-J learns an inner representation of the language that can be used to extract features useful for downstream tasks, however, the intended use is text generation from a prompt. ### How to use This model along with the tokenizer can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-1.4B") model = AutoModelForCausalLM.from_pretrained("Milos/slovak-gpt-j-1.4B") ``` When generating a prompt keep in mind these three things, and you should be good to go: 1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes "Mám rád slovenčinu" (no space after `slovenčinu`) and "Mám rád slovenčinu " (trailing space after `slovenčinu`), i.e `[12805, 2872, 46878]` != `[12805, 2872, 46878, 221]`. 2. Always use good ol' US English primary double quotation marks, i.e. `""` instead of `„“`. 3. 
In case of a new line always enter `\n\n` instead of a single `\n` To illustrate an example of a basic text generation: ``` >>> prompt = "Tradičné jedlo na Orave sú" >>> encoded_input = tokenizer(prompt, return_tensors='pt') >>> output = model.generate(**encoded_input) >>> tokenizer.decode(output[0]) 'Tradičné jedlo na Orave sú bryndzové halušky\n\nNa Orave sa v minulosti varilo viac druhov' ``` ### Capabilities, Limitations, and Biases Based on the parameter count and our internal testing this should be by far the most capable model out of all models released in Slovak GPT-J series. For sake of simplicity, I have omitted all the boilerplate code and swapped `\n` for new lines in the examples that follow. These were generated with both `top_p` and `temperature` set to `0.9`. A spin on "Mor ho!" by Samo Chalpuka ``` PROMPT Zleteli orly z Tatry, tiahnu na podolia, ponad vysoké hory, ponad rovné polia; preleteli cez Dunaj, cez tú šíru vodu, sadli tam za pomedzím slovenského rodu. Duní Dunaj a luna za lunou sa valí OUTPUT Zleteli orly z Tatry, tiahnu na podolia, ponad vysoké hory, ponad rovné polia; preleteli cez Dunaj, cez tú šíru vodu, sadli tam za pomedzím slovenského rodu. Duní Dunaj a luna za lunou sa valí: nad ním svieti pevný hrad na vysokom bralí. Pod tým hradom je zem slovenská, na nej je Slovensko naše. Na tom vrchu leží slovenský národ, pod tým bralom leží krajina slovenská. A pod tým hradom leží i jeho sláva: na tom vrchu, na tom brale leží sláva slovenského národa.``` ``` Jokes are a bit hit or miss. Very often they contain vulgarisms and/or are highly politically incorrect. This one is rather exception than the rule. ``` PROMPT Nasledujú najlepšie vtipy. Prečo OUTPUT Nasledujú najlepšie vtipy. Prečo má policajt pri sebe vždy dva nože? Jeden je na sebaobranu, druhý na boj zblízka. ``` The infamous unicorn prompt from GPT-2 paper. ``` PROMPT V šokujúcom náleze vedci objavili stádo jednorožcov žijúcich v odľahlom, predtým nepreskúmanom údolí, v Andách. Ešte prekvapivejšie pre vedcov bol fakt, že jednorožce hovorili plynule po slovensky. OUTPUT V šokujúcom náleze vedci objavili stádo jednorožcov žijúcich v odľahlom, predtým nepreskúmanom údolí, v Andách. Ešte prekvapivejšie pre vedcov bol fakt, že jednorožce hovorili plynule po slovensky. V Andách ich totiž žilo približne 700 jedincov, ktoré sa živili výhradne materským mliekom. Slováci sú tak pravdepodobne prvými Európanmi, ktorí v Andách stretli jednorožca. "Je to dôkaz, že tieto zvieratá sú inteligentné a že žijú v Andách už stovky rokov," povedal pre webový portál televízie JOJ profesor geológie, geografie a zoológie, Milan Kováč. Podľa profesora Kováča si v Andách zvieratá vytvárajú svoj vlastný jazyk. Je to zároveň dôkaz, že jednorožce žili v minulosti aj v slovenských pohoriach. "Jednorožce sa tam síce vyskytovali, ale neboli tak dobre preskúmané, ako teraz v Andách." Na Slovensku však ľudia o jednorožcoch donedávna vedeli veľmi málo.<|endoftext|> ``` Since the dataset contains profanity, politically incorrect language, and (unintentionally) even a bits of text in Czech, the model can generate them in some extent too. Here's an example of the model output when prompt is in Czech: ``` >>> prompt = "Věta nesmí být sprostá a musí být zcela" >>> encoded_input = tokenizer(prompt, return_tensors='pt') >>> output = model.generate(**encoded_input, max_length=16) >>> tokenizer.decode(output[0]) 'Věta nesmí být sprostá a musí být zcela pravdivá.' 
``` ## Citation and Related Information This was done as a moonlighting project during summer of 2021 to better understand transformers. I didn't have much free time to open source it properly, so it all sat on my hard drive until now :) If you use this model or have any questions about it feel free to hit me up at [twitter](https://twitter.com/miloskondela) or check out my [github](https://github.com/kondela) profile. ### BibTeX entry To cite this model: ```bibtex @misc{slovak-gpt-j-1.4B, author = {Kondela, Milos}, title = {{Slovak GPT-J-1.4B}}, howpublished = {\url{https://huggingface.co/Milos/slovak-gpt-j-1.4B}}, year = 2022, month = February } ``` To cite the codebase that trained this model: ```bibtex @misc{mesh-transformer-jax, author = {Wang, Ben}, title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` ## Acknowledgements This project was generously supported by [TPU Research Cloud (TRC) program](https://sites.research.google/trc/about/). Shoutout also goes to [Ben Wang](https://github.com/kingoflolz) and great [EleutherAI community](https://www.eleuther.ai/).
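The Slovak samples above are described as generated with `top_p` and `temperature` both set to `0.9`; a minimal sketch of that sampling setup with the standard `generate` API follows (the prompt and the length cap are illustrative choices, not values from the card):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-1.4B")
model = AutoModelForCausalLM.from_pretrained("Milos/slovak-gpt-j-1.4B")

# No trailing whitespace in the prompt, per the tokenizer caveats above.
prompt = "Tradičné jedlo na Orave sú"
encoded_input = tokenizer(prompt, return_tensors="pt")

# Nucleus sampling with the settings quoted in the card: top_p=0.9, temperature=0.9.
output = model.generate(
    **encoded_input,
    do_sample=True,
    top_p=0.9,
    temperature=0.9,
    max_length=60,
)
print(tokenizer.decode(output[0]))
```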
{"language": ["sk"], "license": "gpl-3.0", "tags": ["Slovak GPT-J", "pytorch", "causal-lm"]}
Milos/slovak-gpt-j-1.4B
null
[ "transformers", "pytorch", "gptj", "text-generation", "Slovak GPT-J", "causal-lm", "sk", "arxiv:2104.09864", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.09864" ]
[ "sk" ]
TAGS #transformers #pytorch #gptj #text-generation #Slovak GPT-J #causal-lm #sk #arxiv-2104.09864 #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Slovak GPT-J-1.4B ================= Slovak GPT-J-1.4B with the whopping '1,415,283,792' parameters is the latest and the largest model released in Slovak GPT-J series. Smaller variants, Slovak GPT-J-405M and Slovak GPT-J-162M, are still available. Model Description ----------------- Model is based on GPT-J and has over 1.4B trainable parameters. **†** ByteLevelBPETokenizer was trained on the same Slovak corpus. Training data ------------- Slovak GPT-J models were trained on a privately collected dataset consisting of predominantly Slovak text spanning different categories, e.g. web, news articles or even biblical texts - in total, over 40GB of text data was used to train this model. The dataset was preprocessed and cleaned in a specific way that involves minor but a few caveats, so in order to achieve the expected performance, feel free to refer to [How to use] section. Please, keep in mind that despite the effort to remove inappropriate corpus, the model still might generate sensitive content or leak sensitive information. Training procedure ------------------ This model was trained for a bit more than 26.5 billion tokens over 48,001 steps on TPU v3-8 pod. The cross-entropy validation loss at the last step was '2.657'. Intended Use ------------ Same as the original GPT-J, Slovak GPT-J learns an inner representation of the language that can be used to extract features useful for downstream tasks, however, the intended use is text generation from a prompt. ### How to use This model along with the tokenizer can be easily loaded using the 'AutoModelForCausalLM' functionality: When generating a prompt keep in mind these three things, and you should be good to go: 1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes "Mám rád slovenčinu" (no space after 'slovenčinu') and "Mám rád slovenčinu " (trailing space after 'slovenčinu'), i.e '[12805, 2872, 46878]' != '[12805, 2872, 46878, 221]'. 2. Always use good ol' US English primary double quotation marks, i.e. '""' instead of '„“'. 3. In case of a new line always enter '\n\n' instead of a single '\n' To illustrate an example of a basic text generation: ### Capabilities, Limitations, and Biases Based on the parameter count and our internal testing this should be by far the most capable model out of all models released in Slovak GPT-J series. For sake of simplicity, I have omitted all the boilerplate code and swapped '\n' for new lines in the examples that follow. These were generated with both 'top\_p' and 'temperature' set to '0.9'. A spin on "Mor ho!" by Samo Chalpuka PROMPT Nasledujú najlepšie vtipy. Prečo OUTPUT Nasledujú najlepšie vtipy. Prečo má policajt pri sebe vždy dva nože? Jeden je na sebaobranu, druhý na boj zblízka. PROMPT V šokujúcom náleze vedci objavili stádo jednorožcov žijúcich v odľahlom, predtým nepreskúmanom údolí, v Andách. Ešte prekvapivejšie pre vedcov bol fakt, že jednorožce hovorili plynule po slovensky. OUTPUT V šokujúcom náleze vedci objavili stádo jednorožcov žijúcich v odľahlom, predtým nepreskúmanom údolí, v Andách. Ešte prekvapivejšie pre vedcov bol fakt, že jednorožce hovorili plynule po slovensky. V Andách ich totiž žilo približne 700 jedincov, ktoré sa živili výhradne materským mliekom. Slováci sú tak pravdepodobne prvými Európanmi, ktorí v Andách stretli jednorožca. "Je to dôkaz, že tieto zvieratá sú inteligentné a že žijú v Andách už stovky rokov," povedal pre webový portál televízie JOJ profesor geológie, geografie a zoológie, Milan Kováč. 
Podľa profesora Kováča si v Andách zvieratá vytvárajú svoj vlastný jazyk. Je to zároveň dôkaz, že jednorožce žili v minulosti aj v slovenských pohoriach. "Jednorožce sa tam síce vyskytovali, ale neboli tak dobre preskúmané, ako teraz v Andách." Na Slovensku však ľudia o jednorožcoch donedávna vedeli veľmi málo.<|endoftext|>

Example of the model output when the prompt is in Czech: prompt = "Věta nesmí být sprostá a musí být zcela"; encoded_input = tokenizer(prompt, return_tensors='pt'); output = model.generate(**encoded_input, max_length=16); tokenizer.decode(output[0]) gives 'Věta nesmí být sprostá a musí být zcela pravdivá.'

BibTeX entry to cite this model: @misc{slovak-gpt-j-1.4B, author = {Kondela, Milos}, title = {{Slovak GPT-J-1.4B}}, howpublished = {\url{URL}}, year = 2022, month = February }. To cite the codebase that trained this model: @misc{mesh-transformer-jax, author = {Wang, Ben}, title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}}, howpublished = {\url{URL}}, year = 2021, month = May }.

Acknowledgements ---------------- This project was generously supported by TPU Research Cloud (TRC) program. Shoutout also goes to Ben Wang and great EleutherAI community.
[ "### How to use\n\n\nThis model along with the tokenizer can be easily loaded using the 'AutoModelForCausalLM' functionality:\n\n\nWhen generating a prompt keep in mind these three things, and you should be good to go:\n\n\n1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes \"Mám rád slovenčinu\" (no space after 'slovenčinu') and \"Mám rád slovenčinu \" (trailing space after 'slovenčinu'), i.e '[12805, 2872, 46878]' != '[12805, 2872, 46878, 221]'.\n2. Always use good ol' US English primary double quotation marks, i.e. '\"\"' instead of '„“'.\n3. In case of a new line always enter '\\n\\n' instead of a single '\\n'\n\n\nTo illustrate an example of a basic text generation:", "### Capabilities, Limitations, and Biases\n\n\nBased on the parameter count and our internal testing this should be by far the most capable model out of all models released in Slovak GPT-J series.\nFor sake of simplicity, I have omitted all the boilerplate code and swapped '\\n' for new lines in the examples that follow. These were generated with both 'top\\_p' and 'temperature' set to '0.9'.\n\n\nA spin on \"Mor ho!\" by Samo Chalpuka\n\n\nPROMPT\nNasledujú najlepšie vtipy.\n\n\nPrečo\nOUTPUT\nNasledujú najlepšie vtipy.\n\n\nPrečo má policajt pri sebe vždy dva nože? Jeden je na sebaobranu, druhý na boj zblízka.\n\n\nPROMPT\nV šokujúcom náleze vedci objavili stádo jednorožcov žijúcich v odľahlom, predtým nepreskúmanom údolí, v Andách. Ešte prekvapivejšie pre vedcov bol fakt, že jednorožce hovorili plynule po slovensky.\n\n\nOUTPUT\nV šokujúcom náleze vedci objavili stádo jednorožcov žijúcich v odľahlom, predtým nepreskúmanom údolí, v Andách. Ešte prekvapivejšie pre vedcov bol fakt, že jednorožce hovorili plynule po slovensky.\n\n\nV Andách ich totiž žilo približne 700 jedincov, ktoré sa živili výhradne materským mliekom. Slováci sú tak pravdepodobne prvými Európanmi, ktorí v Andách stretli jednorožca. \"Je to dôkaz, že tieto zvieratá sú inteligentné a že žijú v Andách už stovky rokov,\" povedal pre webový portál televízie JOJ profesor geológie, geografie a zoológie, Milan Kováč.\n\n\nPodľa profesora Kováča si v Andách zvieratá vytvárajú svoj vlastný jazyk. Je to zároveň dôkaz, že jednorožce žili v minulosti aj v slovenských pohoriach. \"Jednorožce sa tam síce vyskytovali, ale neboli tak dobre preskúmané, ako teraz v Andách.\"\n\n\nNa Slovensku však ľudia o jednorožcoch donedávna vedeli veľmi málo.<|endoftext|>\n\n\n\n> \n> \n> > \n> > \n> > > \n> > > prompt = \"Věta nesmí být sprostá a musí být zcela\"\n> > > encoded\\_input = tokenizer(prompt, return\\_tensors='pt')\n> > > output = model.generate(encoded\\_input, max\\_length=16)\n> > > URL(output[0])\n> > > 'Věta nesmí být sprostá a musí být zcela pravdivá.'\n> > > bibtex\n> > > @misc{slovak-gpt-j-1.4B,\n> > > author = {Kondela, Milos},\n> > > title = {{Slovak GPT-J-1.4B}},\n> > > howpublished = {\\url{URL\n> > > year = 2022,\n> > > month = February\n> > > }\n> > > bibtex\n> > > @misc{mesh-transformer-jax,\n> > > author = {Wang, Ben},\n> > > title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},\n> > > howpublished = {\\url{URL\n> > > year = 2021,\n> > > month = May\n> > > }\n> > > '''\n> > > \n> > > \n> > > \n> > \n> > \n> > \n> \n> \n> \n\n\nAcknowledgements\n----------------\n\n\nThis project was generously supported by TPU Research Cloud (TRC) program. Shoutout also goes to Ben Wang and great EleutherAI community." ]
[ "TAGS\n#transformers #pytorch #gptj #text-generation #Slovak GPT-J #causal-lm #sk #arxiv-2104.09864 #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### How to use\n\n\nThis model along with the tokenizer can be easily loaded using the 'AutoModelForCausalLM' functionality:\n\n\nWhen generating a prompt keep in mind these three things, and you should be good to go:\n\n\n1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes \"Mám rád slovenčinu\" (no space after 'slovenčinu') and \"Mám rád slovenčinu \" (trailing space after 'slovenčinu'), i.e '[12805, 2872, 46878]' != '[12805, 2872, 46878, 221]'.\n2. Always use good ol' US English primary double quotation marks, i.e. '\"\"' instead of '„“'.\n3. In case of a new line always enter '\\n\\n' instead of a single '\\n'\n\n\nTo illustrate an example of a basic text generation:", "### Capabilities, Limitations, and Biases\n\n\nBased on the parameter count and our internal testing this should be by far the most capable model out of all models released in Slovak GPT-J series.\nFor sake of simplicity, I have omitted all the boilerplate code and swapped '\\n' for new lines in the examples that follow. These were generated with both 'top\\_p' and 'temperature' set to '0.9'.\n\n\nA spin on \"Mor ho!\" by Samo Chalpuka\n\n\nPROMPT\nNasledujú najlepšie vtipy.\n\n\nPrečo\nOUTPUT\nNasledujú najlepšie vtipy.\n\n\nPrečo má policajt pri sebe vždy dva nože? Jeden je na sebaobranu, druhý na boj zblízka.\n\n\nPROMPT\nV šokujúcom náleze vedci objavili stádo jednorožcov žijúcich v odľahlom, predtým nepreskúmanom údolí, v Andách. Ešte prekvapivejšie pre vedcov bol fakt, že jednorožce hovorili plynule po slovensky.\n\n\nOUTPUT\nV šokujúcom náleze vedci objavili stádo jednorožcov žijúcich v odľahlom, predtým nepreskúmanom údolí, v Andách. Ešte prekvapivejšie pre vedcov bol fakt, že jednorožce hovorili plynule po slovensky.\n\n\nV Andách ich totiž žilo približne 700 jedincov, ktoré sa živili výhradne materským mliekom. Slováci sú tak pravdepodobne prvými Európanmi, ktorí v Andách stretli jednorožca. \"Je to dôkaz, že tieto zvieratá sú inteligentné a že žijú v Andách už stovky rokov,\" povedal pre webový portál televízie JOJ profesor geológie, geografie a zoológie, Milan Kováč.\n\n\nPodľa profesora Kováča si v Andách zvieratá vytvárajú svoj vlastný jazyk. Je to zároveň dôkaz, že jednorožce žili v minulosti aj v slovenských pohoriach. 
\"Jednorožce sa tam síce vyskytovali, ale neboli tak dobre preskúmané, ako teraz v Andách.\"\n\n\nNa Slovensku však ľudia o jednorožcoch donedávna vedeli veľmi málo.<|endoftext|>\n\n\n\n> \n> \n> > \n> > \n> > > \n> > > prompt = \"Věta nesmí být sprostá a musí být zcela\"\n> > > encoded\\_input = tokenizer(prompt, return\\_tensors='pt')\n> > > output = model.generate(encoded\\_input, max\\_length=16)\n> > > URL(output[0])\n> > > 'Věta nesmí být sprostá a musí být zcela pravdivá.'\n> > > bibtex\n> > > @misc{slovak-gpt-j-1.4B,\n> > > author = {Kondela, Milos},\n> > > title = {{Slovak GPT-J-1.4B}},\n> > > howpublished = {\\url{URL\n> > > year = 2022,\n> > > month = February\n> > > }\n> > > bibtex\n> > > @misc{mesh-transformer-jax,\n> > > author = {Wang, Ben},\n> > > title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},\n> > > howpublished = {\\url{URL\n> > > year = 2021,\n> > > month = May\n> > > }\n> > > '''\n> > > \n> > > \n> > > \n> > \n> > \n> > \n> \n> \n> \n\n\nAcknowledgements\n----------------\n\n\nThis project was generously supported by TPU Research Cloud (TRC) program. Shoutout also goes to Ben Wang and great EleutherAI community." ]
text-generation
transformers
# Slovak GPT-J-162M Slovak GPT-J-162M is the first model released in Slovak GPT-J series and the very first publicly available transformer trained predominantly on Slovak corpus. Since the initial release two other models were made public, [Slovak GPT-J-405M](https://huggingface.co/Milos/slovak-gpt-j-405M) and the largest [Slovak GPT-J-1.4B](https://huggingface.co/Milos/slovak-gpt-j-1.4B). ## Model Description Model is based on [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax/) and has over 162M trainable parameters. <figure> | Hyperparameter | Value | |----------------------|-------------------------------------------------------------------------------------------------------------------------------| | \\(n_{parameters}\\) | 162,454,608 | | \\(n_{layers}\\) | 12 | | \\(d_{model}\\) | 768 | | \\(d_{ff}\\) | 16384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2048 | | \\(n_{vocab}\\) | 50256 (same tokenizer as GPT-2/3&dagger;) | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | <p><strong>&dagger;</strong> ByteLevelBPETokenizer was trained on the same Slovak corpus.</p></figure> ## Training data Slovak GPT-J-162M was trained on a privately collected dataset consisting of predominantly Slovak text spanning different categories, e.g. web, news articles or even biblical texts - in total, over 40GB of text data was used to train this model. The dataset was preprocessed and cleaned in a specific way that involves minor but a few caveats, so in order to achieve the expected performance, feel free to refer to [How to use] section. Please, keep in mind that despite the effort to remove inappropriate parts of the corpus, the model still might generate sensitive content or leak sensitive information. ## Training procedure This model was trained for almost 37 billion tokens over 69,001 steps on TPU v3-8 pod. The cross-entropy validation loss at the last step was 3.065. ## Intended Use Same as the original GPT-J, Slovak GPT-J learns an inner representation of the language that can be used to extract features useful for downstream tasks, however, the intended use is text generation from a prompt. ### How to use This model along with the tokenizer can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-162M") model = AutoModelForCausalLM.from_pretrained("Milos/slovak-gpt-j-162M") ``` When generating a prompt keep in mind these three things, and you should be good to go: 1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes "Mám rád slovenčinu" (no space after `slovenčinu`) and "Mám rád slovenčinu " (trailing space after `slovenčinu`), i.e `[12805, 2872, 46878]` != `[12805, 2872, 46878, 221]`. 2. Always use good ol' US English primary double quotation marks, i.e. `""` instead of `„“`. 3. 
In case of a new line always enter `\n\n` instead of a single `\n` To illustrate an example of a basic text generation: ``` >>> prompt = "Moje najobľubenejšie mesto na severe Slovenska je" >>> encoded_input = tokenizer(prompt, return_tensors='pt') >>> output = model.generate(**encoded_input) >>> tokenizer.decode(output[0]) 'Moje najobľubenejšie mesto na severe Slovenska je Žilina.\n\nV Žiline sa nachádza množstvo zaujímavých miest' ``` ### Capabilities, Limitations, and Biases First and foremost, the capability of this particular model is very limited due to its relatively small size totalling only 162M parameters, hence the intended use of this particular model is to educate and have fun! :) Since the dataset contains profanity, politically incorrect language, and (unintentionally) even a bits of text in Czech, the model can generate them in some extent too. Here's an example of the model output when prompt is in Czech: ``` >>> prompt = "Věta nesmí být sprostá a musí být zcela" >>> encoded_input = tokenizer(prompt, return_tensors='pt') >>> output = model.generate(**encoded_input, max_length=16) >>> tokenizer.decode(output[0]) 'Věta nesmí být sprostá a musí být zcela věrná.' ``` ## Citation and Related Information This was done as a moonlighting project during summer of 2021 to better understand transformers. I didn't have much free time to open source it properly, so it all sat on my hard drive until now. Based on the popularity and interest in this model I might release _substantially_ larger versions of Slovak GPT-J models that are way more capable. If you use this model or have any questions about it feel free to hit me up at [twitter](https://twitter.com/miloskondela) or check out my [github](https://github.com/kondela) profile. ### BibTeX entry To cite this model: ```bibtex @misc{slovak-gpt-j-162m, author = {Kondela, Milos}, title = {{Slovak GPT-J-162M}}, howpublished = {\url{https://huggingface.co/Milos/slovak-gpt-j-162M}}, year = 2022, month = February } ``` To cite the codebase that trained this model: ```bibtex @misc{mesh-transformer-jax, author = {Wang, Ben}, title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` ## Acknowledgements This project was generously supported by [TPU Research Cloud (TRC) program](https://sites.research.google/trc/about/). Shoutout also goes to [Ben Wang](https://github.com/kingoflolz) and great [EleutherAI community](https://www.eleuther.ai/).
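The trailing-whitespace caveat above is easy to verify; the short sketch below (added for illustration) simply encodes both variants and prints the token ids the card quotes:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-162M")

# A trailing space changes the encoding: the card quotes an extra token id 221.
print(tokenizer("Mám rád slovenčinu")["input_ids"])   # expected: [12805, 2872, 46878]
print(tokenizer("Mám rád slovenčinu ")["input_ids"])  # expected: [12805, 2872, 46878, 221]
```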
{"language": ["sk"], "license": "gpl-3.0", "tags": ["Slovak GPT-J", "pytorch", "causal-lm"]}
Milos/slovak-gpt-j-162M
null
[ "transformers", "pytorch", "gptj", "text-generation", "Slovak GPT-J", "causal-lm", "sk", "arxiv:2104.09864", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.09864" ]
[ "sk" ]
TAGS #transformers #pytorch #gptj #text-generation #Slovak GPT-J #causal-lm #sk #arxiv-2104.09864 #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
Slovak GPT-J-162M ================= Slovak GPT-J-162M is the first model released in Slovak GPT-J series and the very first publicly available transformer trained predominantly on Slovak corpus. Since the initial release two other models were made public, Slovak GPT-J-405M and the largest Slovak GPT-J-1.4B. Model Description ----------------- Model is based on GPT-J and has over 162M trainable parameters. **†** ByteLevelBPETokenizer was trained on the same Slovak corpus. Training data ------------- Slovak GPT-J-162M was trained on a privately collected dataset consisting of predominantly Slovak text spanning different categories, e.g. web, news articles or even biblical texts - in total, over 40GB of text data was used to train this model. The dataset was preprocessed and cleaned in a specific way that involves minor but a few caveats, so in order to achieve the expected performance, feel free to refer to [How to use] section. Please, keep in mind that despite the effort to remove inappropriate parts of the corpus, the model still might generate sensitive content or leak sensitive information. Training procedure ------------------ This model was trained for almost 37 billion tokens over 69,001 steps on TPU v3-8 pod. The cross-entropy validation loss at the last step was 3.065. Intended Use ------------ Same as the original GPT-J, Slovak GPT-J learns an inner representation of the language that can be used to extract features useful for downstream tasks, however, the intended use is text generation from a prompt. ### How to use This model along with the tokenizer can be easily loaded using the 'AutoModelForCausalLM' functionality: When generating a prompt keep in mind these three things, and you should be good to go: 1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes "Mám rád slovenčinu" (no space after 'slovenčinu') and "Mám rád slovenčinu " (trailing space after 'slovenčinu'), i.e '[12805, 2872, 46878]' != '[12805, 2872, 46878, 221]'. 2. Always use good ol' US English primary double quotation marks, i.e. '""' instead of '„“'. 3. In case of a new line always enter '\n\n' instead of a single '\n' To illustrate an example of a basic text generation: ### Capabilities, Limitations, and Biases First and foremost, the capability of this particular model is very limited due to its relatively small size totalling only 162M parameters, hence the intended use of this particular model is to educate and have fun! :) Since the dataset contains profanity, politically incorrect language, and (unintentionally) even a bits of text in Czech, the model can generate them in some extent too. Here's an example of the model output when prompt is in Czech: and Related Information This was done as a moonlighting project during summer of 2021 to better understand transformers. I didn't have much free time to open source it properly, so it all sat on my hard drive until now. Based on the popularity and interest in this model I might release *substantially* larger versions of Slovak GPT-J models that are way more capable. If you use this model or have any questions about it feel free to hit me up at twitter or check out my github profile. ### BibTeX entry To cite this model: To cite the codebase that trained this model: Acknowledgements ---------------- This project was generously supported by TPU Research Cloud (TRC) program. Shoutout also goes to Ben Wang and great EleutherAI community.
[ "### How to use\n\n\nThis model along with the tokenizer can be easily loaded using the 'AutoModelForCausalLM' functionality:\n\n\nWhen generating a prompt keep in mind these three things, and you should be good to go:\n\n\n1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes \"Mám rád slovenčinu\" (no space after 'slovenčinu') and \"Mám rád slovenčinu \" (trailing space after 'slovenčinu'), i.e '[12805, 2872, 46878]' != '[12805, 2872, 46878, 221]'.\n2. Always use good ol' US English primary double quotation marks, i.e. '\"\"' instead of '„“'.\n3. In case of a new line always enter '\\n\\n' instead of a single '\\n'\n\n\nTo illustrate an example of a basic text generation:", "### Capabilities, Limitations, and Biases\n\n\nFirst and foremost, the capability of this particular model is very limited due to its relatively small size totalling only 162M parameters, hence the intended use of this particular model is to educate and have fun! :)\n\n\nSince the dataset contains profanity, politically incorrect language, and (unintentionally) even a bits of text in Czech, the model can generate them in some extent too. Here's an example of the model output when prompt is in Czech:\n\n\nand Related Information\n\n\nThis was done as a moonlighting project during summer of 2021 to better understand transformers. I didn't have much free time to open source it properly, so it all sat on my hard drive until now. Based on the popularity and interest in this model I might release *substantially* larger versions of Slovak GPT-J models that are way more capable.\n\n\nIf you use this model or have any questions about it feel free to hit me up at twitter or check out my github profile.", "### BibTeX entry\n\n\nTo cite this model:\n\n\nTo cite the codebase that trained this model:\n\n\nAcknowledgements\n----------------\n\n\nThis project was generously supported by TPU Research Cloud (TRC) program. Shoutout also goes to Ben Wang and great EleutherAI community." ]
[ "TAGS\n#transformers #pytorch #gptj #text-generation #Slovak GPT-J #causal-lm #sk #arxiv-2104.09864 #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nThis model along with the tokenizer can be easily loaded using the 'AutoModelForCausalLM' functionality:\n\n\nWhen generating a prompt keep in mind these three things, and you should be good to go:\n\n\n1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes \"Mám rád slovenčinu\" (no space after 'slovenčinu') and \"Mám rád slovenčinu \" (trailing space after 'slovenčinu'), i.e '[12805, 2872, 46878]' != '[12805, 2872, 46878, 221]'.\n2. Always use good ol' US English primary double quotation marks, i.e. '\"\"' instead of '„“'.\n3. In case of a new line always enter '\\n\\n' instead of a single '\\n'\n\n\nTo illustrate an example of a basic text generation:", "### Capabilities, Limitations, and Biases\n\n\nFirst and foremost, the capability of this particular model is very limited due to its relatively small size totalling only 162M parameters, hence the intended use of this particular model is to educate and have fun! :)\n\n\nSince the dataset contains profanity, politically incorrect language, and (unintentionally) even a bits of text in Czech, the model can generate them in some extent too. Here's an example of the model output when prompt is in Czech:\n\n\nand Related Information\n\n\nThis was done as a moonlighting project during summer of 2021 to better understand transformers. I didn't have much free time to open source it properly, so it all sat on my hard drive until now. Based on the popularity and interest in this model I might release *substantially* larger versions of Slovak GPT-J models that are way more capable.\n\n\nIf you use this model or have any questions about it feel free to hit me up at twitter or check out my github profile.", "### BibTeX entry\n\n\nTo cite this model:\n\n\nTo cite the codebase that trained this model:\n\n\nAcknowledgements\n----------------\n\n\nThis project was generously supported by TPU Research Cloud (TRC) program. Shoutout also goes to Ben Wang and great EleutherAI community." ]
text-generation
transformers
# Slovak GPT-J-405M Slovak GPT-J-405M is the second model released in Slovak GPT-J series after its smaller variant [Slovak GPT-J-162M](https://huggingface.co/Milos/slovak-gpt-j-162M). Since then a larger [Slovak GPT-J-1.4B](https://huggingface.co/Milos/slovak-gpt-j-1.4B) was released. ## Model Description Model is based on [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax/) and has over 405M trainable parameters. <figure> | Hyperparameter | Value | |----------------------|----------------------------------------------------------------------------------------------------------------------------------------| | \\(n_{parameters}\\) | 405,677,136 | | \\(n_{layers}\\) | 24 | | \\(d_{model}\\) | 1024 | | \\(d_{ff}\\) | 16384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2048 | | \\(n_{vocab}\\) | 50256 (same tokenizer as GPT-2/3&dagger;) | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | <p><strong>&dagger;</strong> ByteLevelBPETokenizer was trained on the same Slovak corpus.</p></figure> ## Training data Slovak GPT-J models were trained on a privately collected dataset consisting of predominantly Slovak text spanning different categories, e.g. web, news articles or even biblical texts - in total, over 40GB of text data was used to train this model. The dataset was preprocessed and cleaned in a specific way that involves minor but a few caveats, so in order to achieve the expected performance, feel free to refer to [How to use] section. Please, keep in mind that despite the effort to remove inappropriate corpus, the model still might generate sensitive content or leak sensitive information. ## Training procedure This model was trained for a bit more than 36.5 billion tokens over 69,001 steps on TPU v3-8 pod. The cross-entropy validation loss at the last step was `2.821`. ## Intended Use Same as the original GPT-J, Slovak GPT-J learns an inner representation of the language that can be used to extract features useful for downstream tasks, however, the intended use is text generation from a prompt. ### How to use This model along with the tokenizer can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-405M") model = AutoModelForCausalLM.from_pretrained("Milos/slovak-gpt-j-405M") ``` When generating a prompt keep in mind these three things, and you should be good to go: 1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes "Mám rád slovenčinu" (no space after `slovenčinu`) and "Mám rád slovenčinu " (trailing space after `slovenčinu`), i.e `[12805, 2872, 46878]` != `[12805, 2872, 46878, 221]`. 2. Always use good ol' US English primary double quotation marks, i.e. `""` instead of `„“`. 3. 
In case of a new line always enter `\n\n` instead of a single `\n` To illustrate an example of a basic text generation: ``` >>> prompt = "Tradičné jedlo na Orave sú" >>> encoded_input = tokenizer(prompt, return_tensors='pt') >>> output = model.generate(**encoded_input) >>> tokenizer.decode(output[0]) 'Tradičné jedlo na Orave sú bryndzové halušky\n\nNa Orave sa v minulosti varilo viac druhov' ``` ### Capabilities, Limitations, and Biases The capability of this particular model is somewhat decent despite its small size totalling 405M parameters. With relative ease it can manage to generate interesting and grammatically correct content. For example, you can try few of the following prompts. (For sake of simplicity, I have omitted all the boilerplate code and swapped `\n` for new lines). Try generating "How to" articles: ``` PROMPT Ako napísať pôsobivú esej krok za krokom OUTPUT Ako napísať pôsobivú esej krok za krokom V tomto článku sa dozviete, ako napísať esej, ktorá bude mať v sebe niečo, čo zaujme a bude sa vám páčiť. V prvom rade si musíte uvedomiť, že esej je vlastne písaný text, ktorý má byť napísaný na počítači.' ``` However, relying on the model to produce factually correct information isn't recommended. Or try to generate chat conversations: ``` PROMPT Peter: Ako sa mas? Lucia: Ale celkom dobre. Co si robil? Peter: OUTPUT Peter: Ako sa mas? Lucia: Ale celkom dobre. Co si robil? Peter: No, bol som na chate. Lucia: A co si tam robil? Peter: No, bol som tam s kamošmi. Lucia: A co si tam robil? Peter: No, bol som tam s kamošmi. ``` Apparently either Peters are more likely to act suspiciously in this particular context or there's a problem with the model. Let's assume that the second explanation may hold some merit. In general, GPT models can (and often will) get into a repeating cycle of generating same content. This is a common problem beyond the scope of this README, however, see [generate's documentation](https://huggingface.co/docs/transformers/master/en/main_classes/model#transformers.generation_utils.GenerationMixin.generate) on how to introduce a frequency/repetition penalty. Since the dataset contains profanity, politically incorrect language, and (unintentionally) even a bits of text in Czech, the model can generate them in some extent too. Here's an example of the model output when prompt is in Czech: ``` >>> prompt = "Věta nesmí být sprostá a musí být zcela" >>> encoded_input = tokenizer(prompt, return_tensors='pt') >>> output = model.generate(**encoded_input, max_length=16) >>> tokenizer.decode(output[0]) 'Věta nesmí být sprostá a musí být zcela pravdivá.' ``` ## Citation and Related Information This was done as a moonlighting project during summer of 2021 to better understand transformers. I didn't have much free time to open source it properly, so it all sat on my hard drive until now :) If you use this model or have any questions about it feel free to hit me up at [twitter](https://twitter.com/miloskondela) or check out my [github](https://github.com/kondela) profile. 
### BibTeX entry To cite this model: ```bibtex @misc{slovak-gpt-j-405m, author = {Kondela, Milos}, title = {{Slovak GPT-J-405M}}, howpublished = {\url{https://huggingface.co/Milos/slovak-gpt-j-405M}}, year = 2022, month = February } ``` To cite the codebase that trained this model: ```bibtex @misc{mesh-transformer-jax, author = {Wang, Ben}, title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` ## Acknowledgements This project was generously supported by [TPU Research Cloud (TRC) program](https://sites.research.google/trc/about/). Shoutout also goes to [Ben Wang](https://github.com/kingoflolz) and great [EleutherAI community](https://www.eleuther.ai/).
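For the repetition loops discussed above, the usual mitigation is a repetition penalty and/or an n-gram ban passed to `generate`; the sketch below is a hedged illustration, with penalty values chosen arbitrarily rather than recommended by the card:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-405M")
model = AutoModelForCausalLM.from_pretrained("Milos/slovak-gpt-j-405M")

# Chat-style prompt following the card's "\n\n for new lines" convention.
prompt = "Peter: Ako sa mas?\n\nLucia: Ale celkom dobre. Co si robil?\n\nPeter:"
encoded_input = tokenizer(prompt, return_tensors="pt")

# Penalise already-generated tokens and forbid repeating any 3-gram,
# which usually breaks the kind of loop shown in the chat example above.
output = model.generate(
    **encoded_input,
    max_length=96,
    repetition_penalty=1.2,
    no_repeat_ngram_size=3,
)
print(tokenizer.decode(output[0]))
```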
{"language": ["sk"], "license": "gpl-3.0", "tags": ["Slovak GPT-J", "pytorch", "causal-lm"]}
Milos/slovak-gpt-j-405M
null
[ "transformers", "pytorch", "gptj", "text-generation", "Slovak GPT-J", "causal-lm", "sk", "arxiv:2104.09864", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.09864" ]
[ "sk" ]
TAGS #transformers #pytorch #gptj #text-generation #Slovak GPT-J #causal-lm #sk #arxiv-2104.09864 #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
Slovak GPT-J-405M ================= Slovak GPT-J-405M is the second model released in Slovak GPT-J series after its smaller variant Slovak GPT-J-162M. Since then a larger Slovak GPT-J-1.4B was released. Model Description ----------------- Model is based on GPT-J and has over 405M trainable parameters. **†** ByteLevelBPETokenizer was trained on the same Slovak corpus. Training data ------------- Slovak GPT-J models were trained on a privately collected dataset consisting of predominantly Slovak text spanning different categories, e.g. web, news articles or even biblical texts - in total, over 40GB of text data was used to train this model. The dataset was preprocessed and cleaned in a specific way that involves minor but a few caveats, so in order to achieve the expected performance, feel free to refer to [How to use] section. Please, keep in mind that despite the effort to remove inappropriate corpus, the model still might generate sensitive content or leak sensitive information. Training procedure ------------------ This model was trained for a bit more than 36.5 billion tokens over 69,001 steps on TPU v3-8 pod. The cross-entropy validation loss at the last step was '2.821'. Intended Use ------------ Same as the original GPT-J, Slovak GPT-J learns an inner representation of the language that can be used to extract features useful for downstream tasks, however, the intended use is text generation from a prompt. ### How to use This model along with the tokenizer can be easily loaded using the 'AutoModelForCausalLM' functionality: When generating a prompt keep in mind these three things, and you should be good to go: 1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes "Mám rád slovenčinu" (no space after 'slovenčinu') and "Mám rád slovenčinu " (trailing space after 'slovenčinu'), i.e '[12805, 2872, 46878]' != '[12805, 2872, 46878, 221]'. 2. Always use good ol' US English primary double quotation marks, i.e. '""' instead of '„“'. 3. In case of a new line always enter '\n\n' instead of a single '\n' To illustrate an example of a basic text generation: ### Capabilities, Limitations, and Biases The capability of this particular model is somewhat decent despite its small size totalling 405M parameters. With relative ease it can manage to generate interesting and grammatically correct content. For example, you can try few of the following prompts. (For sake of simplicity, I have omitted all the boilerplate code and swapped '\n' for new lines). Try generating "How to" articles: However, relying on the model to produce factually correct information isn't recommended. Or try to generate chat conversations: Apparently either Peters are more likely to act suspiciously in this particular context or there's a problem with the model. Let's assume that the second explanation may hold some merit. In general, GPT models can (and often will) get into a repeating cycle of generating same content. This is a common problem beyond the scope of this README, however, see generate's documentation on how to introduce a frequency/repetition penalty. Since the dataset contains profanity, politically incorrect language, and (unintentionally) even a bits of text in Czech, the model can generate them in some extent too. Here's an example of the model output when prompt is in Czech: and Related Information This was done as a moonlighting project during summer of 2021 to better understand transformers. 
I didn't have much free time to open source it properly, so it all sat on my hard drive until now :) If you use this model or have any questions about it feel free to hit me up at twitter or check out my github profile. ### BibTeX entry To cite this model: To cite the codebase that trained this model: Acknowledgements ---------------- This project was generously supported by TPU Research Cloud (TRC) program. Shoutout also goes to Ben Wang and great EleutherAI community.
[ "### How to use\n\n\nThis model along with the tokenizer can be easily loaded using the 'AutoModelForCausalLM' functionality:\n\n\nWhen generating a prompt keep in mind these three things, and you should be good to go:\n\n\n1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes \"Mám rád slovenčinu\" (no space after 'slovenčinu') and \"Mám rád slovenčinu \" (trailing space after 'slovenčinu'), i.e '[12805, 2872, 46878]' != '[12805, 2872, 46878, 221]'.\n2. Always use good ol' US English primary double quotation marks, i.e. '\"\"' instead of '„“'.\n3. In case of a new line always enter '\\n\\n' instead of a single '\\n'\n\n\nTo illustrate an example of a basic text generation:", "### Capabilities, Limitations, and Biases\n\n\nThe capability of this particular model is somewhat decent despite its small size totalling 405M parameters. With relative ease it can manage to generate interesting and grammatically correct content.\nFor example, you can try few of the following prompts. (For sake of simplicity, I have omitted all the boilerplate code and swapped '\\n' for new lines).\n\n\nTry generating \"How to\" articles:\n\n\nHowever, relying on the model to produce factually correct information isn't recommended.\n\n\nOr try to generate chat conversations:\n\n\nApparently either Peters are more likely to act suspiciously in this particular context or there's a problem with the model. Let's assume that the second explanation may hold some merit. In general, GPT models can (and often will) get into a repeating cycle of generating same content. This is a common problem beyond the scope of this README, however, see generate's documentation on how to introduce a frequency/repetition penalty.\n\n\nSince the dataset contains profanity, politically incorrect language, and (unintentionally) even a bits of text in Czech, the model can generate them in some extent too. Here's an example of the model output when prompt is in Czech:\n\n\nand Related Information\n\n\nThis was done as a moonlighting project during summer of 2021 to better understand transformers. I didn't have much free time to open source it properly, so it all sat on my hard drive until now :)\n\n\nIf you use this model or have any questions about it feel free to hit me up at twitter or check out my github profile.", "### BibTeX entry\n\n\nTo cite this model:\n\n\nTo cite the codebase that trained this model:\n\n\nAcknowledgements\n----------------\n\n\nThis project was generously supported by TPU Research Cloud (TRC) program. Shoutout also goes to Ben Wang and great EleutherAI community." ]
[ "TAGS\n#transformers #pytorch #gptj #text-generation #Slovak GPT-J #causal-lm #sk #arxiv-2104.09864 #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nThis model along with the tokenizer can be easily loaded using the 'AutoModelForCausalLM' functionality:\n\n\nWhen generating a prompt keep in mind these three things, and you should be good to go:\n\n\n1. Never leave trailing whitespaces. There's a difference between how tokenizer encodes \"Mám rád slovenčinu\" (no space after 'slovenčinu') and \"Mám rád slovenčinu \" (trailing space after 'slovenčinu'), i.e '[12805, 2872, 46878]' != '[12805, 2872, 46878, 221]'.\n2. Always use good ol' US English primary double quotation marks, i.e. '\"\"' instead of '„“'.\n3. In case of a new line always enter '\\n\\n' instead of a single '\\n'\n\n\nTo illustrate an example of a basic text generation:", "### Capabilities, Limitations, and Biases\n\n\nThe capability of this particular model is somewhat decent despite its small size totalling 405M parameters. With relative ease it can manage to generate interesting and grammatically correct content.\nFor example, you can try few of the following prompts. (For sake of simplicity, I have omitted all the boilerplate code and swapped '\\n' for new lines).\n\n\nTry generating \"How to\" articles:\n\n\nHowever, relying on the model to produce factually correct information isn't recommended.\n\n\nOr try to generate chat conversations:\n\n\nApparently either Peters are more likely to act suspiciously in this particular context or there's a problem with the model. Let's assume that the second explanation may hold some merit. In general, GPT models can (and often will) get into a repeating cycle of generating same content. This is a common problem beyond the scope of this README, however, see generate's documentation on how to introduce a frequency/repetition penalty.\n\n\nSince the dataset contains profanity, politically incorrect language, and (unintentionally) even a bits of text in Czech, the model can generate them in some extent too. Here's an example of the model output when prompt is in Czech:\n\n\nand Related Information\n\n\nThis was done as a moonlighting project during summer of 2021 to better understand transformers. I didn't have much free time to open source it properly, so it all sat on my hard drive until now :)\n\n\nIf you use this model or have any questions about it feel free to hit me up at twitter or check out my github profile.", "### BibTeX entry\n\n\nTo cite this model:\n\n\nTo cite the codebase that trained this model:\n\n\nAcknowledgements\n----------------\n\n\nThis project was generously supported by TPU Research Cloud (TRC) program. Shoutout also goes to Ben Wang and great EleutherAI community." ]
text2text-generation
transformers
# RuT5Tox
{"language": ["ru"], "license": ["apache-2.0"], "tags": ["t5"], "inference": {"parameters": {"num_beams": 5, "no_repeat_ngram_size": 4}}, "widget": [{"text": "\u0427\u0442\u043e \u044d\u0442\u043e \u0437\u0430 \u0435\u0440\u0443\u043d\u0434\u0430?"}]}
IlyaGusev/rut5_tox
null
[ "transformers", "pytorch", "t5", "text2text-generation", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ru" ]
TAGS #transformers #pytorch #t5 #text2text-generation #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# RuT5Tox
[ "# RuT5Tox" ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# RuT5Tox" ]
text2text-generation
transformers
[DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization](https://arxiv.org/abs/2109.02492). ## Introduction DialogLED is a pre-trained model for long dialogue understanding and summarization. It builds on the Longformer-Encoder-Decoder (LED) architecture and uses window-based denoising as the pre-training task on a large amount of long dialogue data for further training. This is the base version of DialogLED; the input length is limited to 16,384 tokens in the pre-training phase. ## Finetuning for Downstream Tasks Please refer to [our GitHub page](https://github.com/microsoft/DialogLM).
{}
MingZhong/DialogLED-base-16384
null
[ "transformers", "pytorch", "led", "text2text-generation", "arxiv:2109.02492", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2109.02492" ]
[]
TAGS #transformers #pytorch #led #text2text-generation #arxiv-2109.02492 #autotrain_compatible #endpoints_compatible #region-us
DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization. ## Introduction DialogLED is a pre-trained model for long dialogue understanding and summarization. It builds on the Longformer-Encoder-Decoder (LED) architecture and uses window-based denoising as the pre-training task on a large amount of long dialogue data for further training. Here is a base version of DialogLED, the input length is limited to 16,384 in the pre-training phase. ## Finetuning for Downstream Tasks Please refer to our GitHub page.
[ "## Introduction\nDialogLED is a pre-trained model for long dialogue understanding and summarization. It builds on the Longformer-Encoder-Decoder (LED) architecture and uses window-based denoising as the pre-training task on a large amount of long dialogue data for further training. Here is a base version of DialogLED, the input length is limited to 16,384 in the pre-training phase.", "## Finetuning for Downstream Tasks\nPlease refer to our GitHub page." ]
[ "TAGS\n#transformers #pytorch #led #text2text-generation #arxiv-2109.02492 #autotrain_compatible #endpoints_compatible #region-us \n", "## Introduction\nDialogLED is a pre-trained model for long dialogue understanding and summarization. It builds on the Longformer-Encoder-Decoder (LED) architecture and uses window-based denoising as the pre-training task on a large amount of long dialogue data for further training. Here is a base version of DialogLED, the input length is limited to 16,384 in the pre-training phase.", "## Finetuning for Downstream Tasks\nPlease refer to our GitHub page." ]
text2text-generation
transformers
[DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization](https://arxiv.org/abs/2109.02492). ## Introduction DialogLED is a pre-trained model for long dialogue understanding and summarization. It builds on the Longformer-Encoder-Decoder (LED) architecture and uses window-based denoising as the pre-training task on a large amount of long dialogue data for further training. This is the large version of DialogLED; the input length is limited to 5,120 tokens in the pre-training phase. ## Finetuning for Downstream Tasks Please refer to [our GitHub page](https://github.com/microsoft/DialogLM).
{}
MingZhong/DialogLED-large-5120
null
[ "transformers", "pytorch", "led", "text2text-generation", "arxiv:2109.02492", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2109.02492" ]
[]
TAGS #transformers #pytorch #led #text2text-generation #arxiv-2109.02492 #autotrain_compatible #endpoints_compatible #region-us
DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization. ## Introduction DialogLED is a pre-trained model for long dialogue understanding and summarization. It builds on the Longformer-Encoder-Decoder (LED) architecture and uses window-based denoising as the pre-training task on a large amount of long dialogue data for further training. Here is a large version of DialogLED, the input length is limited to 5,120 in the pre-training phase. ## Finetuning for Downstream Tasks Please refer to our GitHub page.
[ "## Introduction\nDialogLED is a pre-trained model for long dialogue understanding and summarization. It builds on the Longformer-Encoder-Decoder (LED) architecture and uses window-based denoising as the pre-training task on a large amount of long dialogue data for further training. Here is a large version of DialogLED, the input length is limited to 5,120 in the pre-training phase.", "## Finetuning for Downstream Tasks\nPlease refer to our GitHub page." ]
[ "TAGS\n#transformers #pytorch #led #text2text-generation #arxiv-2109.02492 #autotrain_compatible #endpoints_compatible #region-us \n", "## Introduction\nDialogLED is a pre-trained model for long dialogue understanding and summarization. It builds on the Longformer-Encoder-Decoder (LED) architecture and uses window-based denoising as the pre-training task on a large amount of long dialogue data for further training. Here is a large version of DialogLED, the input length is limited to 5,120 in the pre-training phase.", "## Finetuning for Downstream Tasks\nPlease refer to our GitHub page." ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tmp6tsjsfbf This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0178 - Train Sparse Categorical Accuracy: 0.9962 - Epoch: 49 ## Model description This model classifies the title of a content (e.g., YouTube video, article, or podcast episode) into 1 of 8 subjects 0. art 1. personal development 2. world 3. health 4. science 5. business 6. humanities 7. technology. This model is used to support [Sanderling](https://sanderling.app) ## Intended uses & limitations More information needed ## Training and evaluation data We used 1.5k labeled titles to train the model. Majority of the training dataset are English titles. The rest are Chinese titles. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Sparse Categorical Accuracy | Epoch | |:----------:|:---------------------------------:|:-----:| | 1.8005 | 0.3956 | 0 | | 1.3302 | 0.5916 | 1 | | 0.8998 | 0.7575 | 2 | | 0.6268 | 0.8468 | 3 | | 0.4239 | 0.9062 | 4 | | 0.2982 | 0.9414 | 5 | | 0.2245 | 0.9625 | 6 | | 0.1678 | 0.9730 | 7 | | 0.1399 | 0.9745 | 8 | | 0.1059 | 0.9827 | 9 | | 0.0822 | 0.9850 | 10 | | 0.0601 | 0.9902 | 11 | | 0.0481 | 0.9932 | 12 | | 0.0386 | 0.9955 | 13 | | 0.0292 | 0.9977 | 14 | | 0.0353 | 0.9940 | 15 | | 0.0336 | 0.9932 | 16 | | 0.0345 | 0.9910 | 17 | | 0.0179 | 0.9985 | 18 | | 0.0150 | 0.9985 | 19 | | 0.0365 | 0.9895 | 20 | | 0.0431 | 0.9895 | 21 | | 0.0243 | 0.9955 | 22 | | 0.0317 | 0.9925 | 23 | | 0.0375 | 0.9902 | 24 | | 0.0138 | 0.9970 | 25 | | 0.0159 | 0.9977 | 26 | | 0.0160 | 0.9962 | 27 | | 0.0151 | 0.9977 | 28 | | 0.0337 | 0.9902 | 29 | | 0.0119 | 0.9977 | 30 | | 0.0165 | 0.9955 | 31 | | 0.0133 | 0.9977 | 32 | | 0.0047 | 1.0 | 33 | | 0.0037 | 1.0 | 34 | | 0.0033 | 1.0 | 35 | | 0.0031 | 1.0 | 36 | | 0.0036 | 1.0 | 37 | | 0.0343 | 0.9887 | 38 | | 0.0234 | 0.9962 | 39 | | 0.0034 | 1.0 | 40 | | 0.0036 | 1.0 | 41 | | 0.0261 | 0.9917 | 42 | | 0.0111 | 0.9970 | 43 | | 0.0039 | 1.0 | 44 | | 0.0214 | 0.9932 | 45 | | 0.0044 | 0.9985 | 46 | | 0.0122 | 0.9985 | 47 | | 0.0119 | 0.9962 | 48 | | 0.0178 | 0.9962 | 49 | ### Framework versions - Transformers 4.15.0 - TensorFlow 2.7.0 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "tmp6tsjsfbf", "results": []}]}
Mingyi/classify_title_subject
null
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
tmp6tsjsfbf =========== This model is a fine-tuned version of bert-base-multilingual-cased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.0178 * Train Sparse Categorical Accuracy: 0.9962 * Epoch: 49 Model description ----------------- This model classifies the title of a content (e.g., YouTube video, article, or podcast episode) into 1 of 8 subjects 0. art 1. personal development 2. world 3. health 4. science 5. business 6. humanities 7. technology. This model is used to support Sanderling Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- We used 1.5k labeled titles to train the model. Majority of the training dataset are English titles. The rest are Chinese titles. Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'learning\_rate': 5e-06, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.15.0 * TensorFlow 2.7.0 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 5e-06, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* TensorFlow 2.7.0\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': 5e-06, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* TensorFlow 2.7.0\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0596 - Precision: 0.9240 - Recall: 0.9378 - F1: 0.9308 - Accuracy: 0.9838 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2381 | 1.0 | 878 | 0.0707 | 0.9100 | 0.9240 | 0.9170 | 0.9805 | | 0.0563 | 2.0 | 1756 | 0.0583 | 0.9246 | 0.9382 | 0.9314 | 0.9835 | | 0.03 | 3.0 | 2634 | 0.0596 | 0.9240 | 0.9378 | 0.9308 | 0.9838 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9239501818582607, "name": "Precision"}, {"type": "recall", "value": 0.9378006488421524, "name": "Recall"}, {"type": "f1", "value": 0.9308238951809905, "name": "F1"}, {"type": "accuracy", "value": 0.9837800054013695, "name": "Accuracy"}]}]}]}
Minowa/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-ner ===================================== This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset. It achieves the following results on the evaluation set: * Loss: 0.0596 * Precision: 0.9240 * Recall: 0.9378 * F1: 0.9308 * Accuracy: 0.9838 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-ro-to-en This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset. It achieves the following results on the evaluation set: - Loss: 1.5877 - Bleu: 13.4499 - Gen Len: 17.5073 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 1.6167 | 0.05 | 2000 | 1.8649 | 9.7029 | 17.5753 | | 1.4551 | 0.1 | 4000 | 1.7810 | 10.6382 | 17.5358 | | 1.3723 | 0.16 | 6000 | 1.7369 | 11.1285 | 17.5158 | | 1.3373 | 0.21 | 8000 | 1.7086 | 11.6173 | 17.5013 | | 1.2935 | 0.26 | 10000 | 1.6890 | 12.0641 | 17.5038 | | 1.2632 | 0.31 | 12000 | 1.6670 | 12.3012 | 17.5253 | | 1.2463 | 0.37 | 14000 | 1.6556 | 12.3991 | 17.5153 | | 1.2272 | 0.42 | 16000 | 1.6442 | 12.7392 | 17.4732 | | 1.2052 | 0.47 | 18000 | 1.6328 | 12.8446 | 17.5143 | | 1.1985 | 0.52 | 20000 | 1.6233 | 13.0892 | 17.4807 | | 1.1821 | 0.58 | 22000 | 1.6153 | 13.1529 | 17.4952 | | 1.1791 | 0.63 | 24000 | 1.6079 | 13.2964 | 17.5088 | | 1.1698 | 0.68 | 26000 | 1.6038 | 13.3548 | 17.4842 | | 1.154 | 0.73 | 28000 | 1.5957 | 13.3012 | 17.5053 | | 1.1634 | 0.79 | 30000 | 1.5931 | 13.4203 | 17.5083 | | 1.1487 | 0.84 | 32000 | 1.5893 | 13.3959 | 17.5123 | | 1.1495 | 0.89 | 34000 | 1.5875 | 13.3745 | 17.4902 | | 1.1458 | 0.94 | 36000 | 1.5877 | 13.4129 | 17.5043 | | 1.1465 | 1.0 | 38000 | 1.5877 | 13.4499 | 17.5073 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model-index": [{"name": "t5-small-finetuned-ro-to-en", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 13.4499, "name": "Bleu"}]}]}]}
Mirelle/t5-small-finetuned-ro-to-en
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
t5-small-finetuned-ro-to-en =========================== This model is a fine-tuned version of t5-small on the wmt16 dataset. It achieves the following results on the evaluation set: * Loss: 1.5877 * Bleu: 13.4499 * Gen Len: 17.5073 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu111 * Datasets 1.16.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-finetuned This model is a fine-tuned version of [yhavinga/t5-v1.1-base-dutch-cnn-test](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cnn-test) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 1.0 | 1 | nan | 33.8462 | 31.746 | 30.7692 | 30.7692 | 86.0 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1 - Datasets 1.15.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "test-finetuned", "results": []}]}
Mirjam/test-finetuned
null
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
test-finetuned ============== This model is a fine-tuned version of yhavinga/t5-v1.1-base-dutch-cnn-test on the None dataset. Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 3 * eval\_batch\_size: 3 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.15.0 * Pytorch 1.10.1 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 3\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 3\n* eval\\_batch\\_size: 3\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7134 - Matthews Correlation: 0.5411 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5294 | 1.0 | 535 | 0.5082 | 0.4183 | | 0.3483 | 2.0 | 1070 | 0.4969 | 0.5259 | | 0.2355 | 3.0 | 1605 | 0.6260 | 0.5065 | | 0.1733 | 4.0 | 2140 | 0.7134 | 0.5411 | | 0.1238 | 5.0 | 2675 | 0.8516 | 0.5291 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.54109909504615, "name": "Matthews Correlation"}]}]}]}
MisbaHF/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-cola ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.7134 * Matthews Correlation: 0.5411 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.12.3 * Pytorch 1.10.0+cu111 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-testingSB-testingSB This model is a fine-tuned version of [MistahCase/distilroberta-base-testingSB](https://huggingface.co/MistahCase/distilroberta-base-testingSB) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9870 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1463 | 1.0 | 1461 | 1.1171 | | 1.0188 | 2.0 | 2922 | 1.0221 | | 1.0016 | 3.0 | 4383 | 0.9870 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilroberta-base-testingSB-testingSB", "results": []}]}
MistahCase/distilroberta-base-testingSB-testingSB
null
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilroberta-base-testingSB-testingSB ====================================== This model is a fine-tuned version of MistahCase/distilroberta-base-testingSB on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.9870 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3.0 ### Training results ### Framework versions * Transformers 4.20.0 * Pytorch 1.11.0+cu113 * Datasets 2.3.2 * Tokenizers 0.12.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.20.0\n* Pytorch 1.11.0+cu113\n* Datasets 2.3.2\n* Tokenizers 0.12.1" ]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.20.0\n* Pytorch 1.11.0+cu113\n* Datasets 2.3.2\n* Tokenizers 0.12.1" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-testingSB This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on a company specific, Danish dataset. It achieves the following results on the evaluation set: - Loss: 1.0403 ## Model description Customer-specific model used to embed asset management work orders in Danish ## Intended uses & limitations Customer-specific and trained for unsupervised categorization tasks ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results Epoch Training Loss Validation Loss 1 0.988500 1.056376 2 0.996300 1.027803 3 0.990300 1.040270 | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.98850 | 1.0 | 1461 | 1.5211 | | 1.3179 | 2.0 | 2922 | 1.3314 | | 1.1931 | 3.0 | 4383 | 1.2530 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilroberta-base-testingSB", "results": []}]}
MistahCase/distilroberta-base-testingSB
null
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilroberta-base-testingSB ============================ This model is a fine-tuned version of distilroberta-base on a company specific, Danish dataset. It achieves the following results on the evaluation set: * Loss: 1.0403 Model description ----------------- Customer-specific model used to embed asset management work orders in Danish Intended uses & limitations --------------------------- Customer-specific and trained for unsupervised categorization tasks Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3.0 ### Training results Epoch Training Loss Validation Loss 1 0.988500 1.056376 2 0.996300 1.027803 3 0.990300 1.040270 ### Framework versions * Transformers 4.12.5 * Pytorch 1.10.0+cu111 * Datasets 1.15.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results\n\n\nEpoch Training Loss Validation Loss\n1 0.988500 1.056376\n2 0.996300 1.027803\n3 0.990300 1.040270", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results\n\n\nEpoch Training Loss Validation Loss\n1 0.988500 1.056376\n2 0.996300 1.027803\n3 0.990300 1.040270", "### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
# Model Description This model is a fine-tuned version of bert-base on the CoLA dataset.
{"language": "en", "license": "mit", "tags": ["sequence classification"], "datasets": ["cola"]}
Modfiededition/bert-fine-tuned-cola
null
[ "transformers", "tf", "bert", "text-classification", "sequence classification", "en", "dataset:cola", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #tf #bert #text-classification #sequence classification #en #dataset-cola #license-mit #autotrain_compatible #endpoints_compatible #region-us
# Model Description This model is fine-tuning bert-base model on Cola dataset
[ "# Model Description\nThis model is fine-tuning bert-base model on Cola dataset" ]
[ "TAGS\n#transformers #tf #bert #text-classification #sequence classification #en #dataset-cola #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Description\nThis model is fine-tuning bert-base model on Cola dataset" ]
text2text-generation
transformers
## t5-base-fine-tuned-on-jfleg T5-base model fine-tuned on the [**JFLEG dataset**](https://huggingface.co/datasets/jfleg) with the objective of **text2text-generation**. # Model Description: T5 is an encoder-decoder model pre-trained with a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format. T5 works well on a variety of tasks out-of-the-box by prepending a different prefix to the input corresponding to each task, e.g., for translation: translate English to German: …, for summarization: summarize: …. The T5 model was presented in [**Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer**](https://arxiv.org/pdf/1910.10683.pdf) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. ## Pre-Processing: For this task of grammar correction, we’ll add the prefix “grammar: ” to each of the input sentences. ``` Grammar: Your Sentence ``` ## How to use: You can use this model directly with the pipeline for detecting and correcting grammatical mistakes. ``` from transformers import pipeline model_checkpoint = "Modfiededition/t5-base-fine-tuned-on-jfleg" model = pipeline("text2text-generation", model=model_checkpoint) text = "I am write on AI" output = model(text) ``` Result(s) ``` I am writing on AI. ```
{}
Modfiededition/t5-base-fine-tuned-on-jfleg
null
[ "transformers", "tf", "t5", "text2text-generation", "arxiv:1910.10683", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "1910.10683" ]
[]
TAGS #transformers #tf #t5 #text2text-generation #arxiv-1910.10683 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
## t5-base-fine-tuned-on-jfleg T5-base model fine-tuned on the JFLEG dataset with the objective of text2text-generation. # Model Description: T5 is an encoder-decoder model pre-trained with a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format. .T5 works well on a variety of tasks out-of-the-box by prepending a different prefix to the input corresponding to each task, e.g., for translation: translate English to German: …, for summarization: summarize: …. The T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. ## Pre-Processing: For this task of grammar correction, we’ll use the prefix “grammar: “ to each of the input sentences. ## How to use : You can use this model directly with the pipeline for detecting and correcting grammatical mistakes. Result(s)
[ "## t5-base-fine-tuned-on-jfleg\nT5-base model fine-tuned on the JFLEG dataset with the objective of text2text-generation.", "# Model Description:\nT5 is an encoder-decoder model pre-trained with a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.\n.T5 works well on a variety of tasks out-of-the-box by prepending a different prefix to the input corresponding to each task, e.g., for translation: translate English to German: …, for summarization: summarize: ….\n\nThe T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.", "## Pre-Processing:\nFor this task of grammar correction, we’ll use the prefix “grammar: “ to each of the input sentences.", "## How to use :\nYou can use this model directly with the pipeline for detecting and correcting grammatical mistakes.\n\n\nResult(s)" ]
[ "TAGS\n#transformers #tf #t5 #text2text-generation #arxiv-1910.10683 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n", "## t5-base-fine-tuned-on-jfleg\nT5-base model fine-tuned on the JFLEG dataset with the objective of text2text-generation.", "# Model Description:\nT5 is an encoder-decoder model pre-trained with a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.\n.T5 works well on a variety of tasks out-of-the-box by prepending a different prefix to the input corresponding to each task, e.g., for translation: translate English to German: …, for summarization: summarize: ….\n\nThe T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.", "## Pre-Processing:\nFor this task of grammar correction, we’ll use the prefix “grammar: “ to each of the input sentences.", "## How to use :\nYou can use this model directly with the pipeline for detecting and correcting grammatical mistakes.\n\n\nResult(s)" ]
text-generation
transformers
# Okabe Rintaro DialoGPT Model
{"tags": ["conversational"]}
ModzabazeR/small-okaberintaro
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Okabe Rintaro DialoGPT Model
[ "# Okabe Rintaro DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Okabe Rintaro DialoGPT Model" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: 207.6065 - Wer: 1.5484 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4.dev0 - Tokenizers 0.11.0
{"language": ["ab"], "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]}
Mofe/speech-sprint-test
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ab", "dataset:common_voice", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ab" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ab #dataset-common_voice #endpoints_compatible #region-us
# This model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: 207.6065 - Wer: 1.5484 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4.dev0 - Tokenizers 0.11.0
[ "# \n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 207.6065\n- Wer: 1.5484", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 10\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu113\n- Datasets 1.18.4.dev0\n- Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ab #dataset-common_voice #endpoints_compatible #region-us \n", "# \n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 207.6065\n- Wer: 1.5484", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 10\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2+cu113\n- Datasets 1.18.4.dev0\n- Tokenizers 0.11.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HA dataset. It achieves the following results on the evaluation set: - Loss: 0.4998 - Wer: 0.5153 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 80.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0021 | 8.33 | 500 | 2.9059 | 1.0 | | 2.6604 | 16.66 | 1000 | 2.6402 | 0.9892 | | 1.2216 | 24.99 | 1500 | 0.6051 | 0.6851 | | 1.0754 | 33.33 | 2000 | 0.5408 | 0.6464 | | 0.9582 | 41.66 | 2500 | 0.5521 | 0.5935 | | 0.8653 | 49.99 | 3000 | 0.5156 | 0.5550 | | 0.7867 | 58.33 | 3500 | 0.5439 | 0.5606 | | 0.7265 | 66.66 | 4000 | 0.4863 | 0.5255 | | 0.6699 | 74.99 | 4500 | 0.5050 | 0.5169 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4.dev0 - Tokenizers 0.11.0
{"language": ["ha"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "ha"}, "metrics": [{"type": "wer", "value": 51.31, "name": "Test WER"}]}]}]}
Mofe/xls-r-hausa-40
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "ha", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ha" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #ha #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #endpoints_compatible #region-us
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - HA dataset. It achieves the following results on the evaluation set: * Loss: 0.4998 * Wer: 0.5153 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 9.6e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 80.0 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.17.0.dev0 * Pytorch 1.10.2+cu113 * Datasets 1.18.4.dev0 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 9.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 80.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu113\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #ha #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 9.6e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 80.0\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu113\n* Datasets 1.18.4.dev0\n* Tokenizers 0.11.0" ]
token-classification
spacy
| Feature | Description | | --- | --- | | **Name** | `en_pipeline` | | **Version** | `0.0.0` | | **spaCy** | `>=3.1.0,<3.2.0` | | **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `ner`, `attribute_ruler`, `lemmatizer` | | **Components** | `tok2vec`, `tagger`, `parser`, `ner`, `attribute_ruler`, `lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (114 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` | | **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` | | **`ner`** | `ARC`, `AST`, `BOOK`, `CAUSAL`, `COMPARISON`, `DATE`, `HEM`, `HOUR`, `HYPO`, `INSTRUMENT`, `JUDGEMENT`, `LAWS`, `MODEL`, `NAME`, `Observation`, `PAR`, `PLACE`, `QUANTITY`, `REASON`, `ZOD` | </details> ### Accuracy | Type | Score | | --- | --- | | `TAG_ACC` | 0.00 | | `DEP_UAS` | 0.00 | | `DEP_LAS` | 0.00 | | `DEP_LAS_PER_TYPE` | 0.00 | | `SENTS_P` | 100.00 | | `SENTS_R` | 100.00 | | `SENTS_F` | 100.00 | | `ENTS_F` | 99.32 | | `ENTS_P` | 99.47 | | `ENTS_R` | 99.17 | | `LEMMA_ACC` | 0.00 | | `NER_LOSS` | 7790.09 |
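The tables above describe the components and label scheme but give no loading example. The following sketch is an assumption: it presumes the `en_pipeline` package has already been installed in the local environment (for example from the packaged wheel of this pipeline), and the sample sentence is invented.

```python
# Hedged sketch, not part of the original card: load the packaged pipeline and
# print entities using the custom label scheme listed above (AST, HOUR, PLACE, ...).
import spacy

nlp = spacy.load("en_pipeline")  # assumes the package is installed in this environment
doc = nlp("The comet was observed at the third hour near the northern horizon.")

for ent in doc.ents:
    print(ent.text, ent.label_)
```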
{"language": ["en"], "tags": ["spacy", "token-classification"]}
MohaAM/en_pipeline
null
[ "spacy", "token-classification", "en", "model-index", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #spacy #token-classification #en #model-index #region-us
### Label Scheme View label scheme (114 labels for 3 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (114 labels for 3 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #en #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (114 labels for 3 components)", "### Accuracy" ]
null
null
utyuiue6
{}
MohamedH/object
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
utyuiue6
[]
[ "TAGS\n#region-us \n" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bertweet-finetuned-rbam This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3971 - F1: 0.6620 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7138 | 1.0 | 1632 | 0.7529 | 0.6814 | | 0.5692 | 2.0 | 3264 | 0.8473 | 0.6803 | | 0.4126 | 3.0 | 4896 | 1.0029 | 0.6617 | | 0.2854 | 4.0 | 6528 | 1.2167 | 0.6635 | | 0.2007 | 5.0 | 8160 | 1.3971 | 0.6620 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
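The card above records training and evaluation only; it does not show inference. A minimal sketch, assuming the standard text-classification pipeline and the repo id of this record (the label names come from the exported model config and are not documented in the card):

```python
# Hedged sketch (assumption, not from the card): classify a tweet-like input.
from transformers import pipeline

clf = pipeline("text-classification", model="MohammadABH/bertweet-finetuned-rbam")
print(clf("This new policy is going to backfire badly."))
```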
{"tags": ["generated_from_trainer"], "metrics": ["f1"], "model-index": [{"name": "bertweet-finetuned-rbam", "results": []}]}
MohammadABH/bertweet-finetuned-rbam
null
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #roberta #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
bertweet-finetuned-rbam ======================= This model is a fine-tuned version of vinai/bertweet-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.3971 * F1: 0.6620 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.16.2 * Pytorch 1.10.0+cu111 * Datasets 1.18.3 * Tokenizers 0.11.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
[ "TAGS\n#transformers #pytorch #roberta #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter-roberta-base-dec2021_rbam_fine_tuned This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-dec2021](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8295 - Accuracy: 0.6777 - Precision: 0.6743 - Recall: 0.6777 - F1: 0.6753 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.8455 | 1.0 | 3264 | 0.7663 | 0.6661 | 0.6802 | 0.6661 | 0.6693 | | 0.6421 | 2.0 | 6528 | 0.8295 | 0.6777 | 0.6743 | 0.6777 | 0.6753 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
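As with the card above, only training details are given. The sketch below is an assumption about usage: it loads the checkpoint directly and converts logits to per-label probabilities, with whatever id-to-label mapping the exported config contains.

```python
# Hedged sketch (assumption, not from the card): score one input and report
# probabilities keyed by the labels stored in the model config.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "MohammadABH/twitter-roberta-base-dec2021_rbam_fine_tuned"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Strongly disagree with the previous reply.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({model.config.id2label[i]: round(float(p), 3) for i, p in enumerate(probs)})
```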
{"tags": ["generated_from_trainer"], "metrics": ["accuracy", "precision", "recall", "f1"], "model-index": [{"name": "twitter-roberta-base-dec2021_rbam_fine_tuned", "results": []}]}
MohammadABH/twitter-roberta-base-dec2021_rbam_fine_tuned
null
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #roberta #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
twitter-roberta-base-dec2021\_rbam\_fine\_tuned =============================================== This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-dec2021 on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.8295 * Accuracy: 0.6777 * Precision: 0.6743 * Recall: 0.6777 * F1: 0.6753 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.17.0 * Pytorch 1.10.0+cu111 * Datasets 2.0.0 * Tokenizers 0.11.6
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0\n* Pytorch 1.10.0+cu111\n* Datasets 2.0.0\n* Tokenizers 0.11.6" ]
[ "TAGS\n#transformers #pytorch #roberta #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.17.0\n* Pytorch 1.10.0+cu111\n* Datasets 2.0.0\n* Tokenizers 0.11.6" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Mohsin272/DialoGPT-medium-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
text-generation
transformers
Famous quote inference model (名言推論モデル)
{"language": ["ja"]}
Momerio/meigen_generate_Japanese
null
[ "transformers", "pytorch", "gpt2", "text-generation", "ja", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja" ]
TAGS #transformers #pytorch #gpt2 #text-generation #ja #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Famous quote inference model (名言推論モデル)
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #ja #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Mona/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 23044997 - CO2 Emissions (in grams): 4.819872182577655 ## Validation Metrics - Loss: 0.001594889909029007 - Accuracy: 0.9997478885667465 - Macro F1: 0.9991190902836993 - Micro F1: 0.9997478885667465 - Weighted F1: 0.9997476735518704 - Macro Precision: 0.9998014460161265 - Micro Precision: 0.9997478885667465 - Weighted Precision: 0.9997479944069787 - Macro Recall: 0.9984426545713851 - Micro Recall: 0.9997478885667465 - Weighted Recall: 0.9997478885667465 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Monsia/autonlp-tweets-classification-23044997 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Monsia/autonlp-tweets-classification-23044997", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Monsia/autonlp-tweets-classification-23044997", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
{"language": "en", "tags": "autonlp", "datasets": ["Monsia/autonlp-data-tweets-classification"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 4.819872182577655}
Monsia/autonlp-tweets-classification-23044997
null
[ "transformers", "pytorch", "distilbert", "text-classification", "autonlp", "en", "dataset:Monsia/autonlp-data-tweets-classification", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-Monsia/autonlp-data-tweets-classification #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 23044997 - CO2 Emissions (in grams): 4.819872182577655 ## Validation Metrics - Loss: 0.001594889909029007 - Accuracy: 0.9997478885667465 - Macro F1: 0.9991190902836993 - Micro F1: 0.9997478885667465 - Weighted F1: 0.9997476735518704 - Macro Precision: 0.9998014460161265 - Micro Precision: 0.9997478885667465 - Weighted Precision: 0.9997479944069787 - Macro Recall: 0.9984426545713851 - Micro Recall: 0.9997478885667465 - Weighted Recall: 0.9997478885667465 ## Usage You can use cURL to access this model: Or Python API:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 23044997\n- CO2 Emissions (in grams): 4.819872182577655", "## Validation Metrics\n\n- Loss: 0.001594889909029007\n- Accuracy: 0.9997478885667465\n- Macro F1: 0.9991190902836993\n- Micro F1: 0.9997478885667465\n- Weighted F1: 0.9997476735518704\n- Macro Precision: 0.9998014460161265\n- Micro Precision: 0.9997478885667465\n- Weighted Precision: 0.9997479944069787\n- Macro Recall: 0.9984426545713851\n- Micro Recall: 0.9997478885667465\n- Weighted Recall: 0.9997478885667465", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-Monsia/autonlp-data-tweets-classification #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 23044997\n- CO2 Emissions (in grams): 4.819872182577655", "## Validation Metrics\n\n- Loss: 0.001594889909029007\n- Accuracy: 0.9997478885667465\n- Macro F1: 0.9991190902836993\n- Micro F1: 0.9997478885667465\n- Weighted F1: 0.9997476735518704\n- Macro Precision: 0.9998014460161265\n- Micro Precision: 0.9997478885667465\n- Weighted Precision: 0.9997479944069787\n- Macro Recall: 0.9984426545713851\n- Micro Recall: 0.9997478885667465\n- Weighted Recall: 0.9997478885667465", "## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:" ]
text-classification
transformers
# camembert-fr-covid-tweet-classification
This model is a fine-tuned checkpoint of [Yanzhu/bertweetfr-base](https://huggingface.co/Yanzhu/bertweetfr-base), fine-tuned on SST-2.
This model reaches an accuracy of 66.00% on the dev set.

In this dataset, given a tweet, the goal was to infer the underlying topic of the tweet by choosing from the following topic classes:
- chiffres : the tweet talks about covid statistics.
- mesures : the tweet talks about measures taken by the government against covid.
- opinions : the tweet talks about people's opinions, such as fake news.
- symptomes : the tweet talks about covid symptoms or variants.
- divers : anything else

# Pipelining the Model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("Monsia/camembert-fr-covid-tweet-classification")
model = AutoModelForSequenceClassification.from_pretrained("Monsia/camembert-fr-covid-tweet-classification")

# The checkpoint carries a standard sequence-classification head, so the
# generic text-classification pipeline serves it directly.
nlp_topic_classif = pipeline("text-classification", model=model, tokenizer=tokenizer)
nlp_topic_classif("tchai on est morts. on va se faire vacciner et ils vont contrôler comme les marionnettes avec des fils. d'après les '' ont dit ''...")
# Output: [{'label': 'opinions', 'score': 0.831}]
```
{"language": ["fr"], "license": "apache-2.0", "tags": ["classification"], "metrics": ["accuracy"], "widget": [{"text": "tchai on est morts. on va se faire vacciner et ils vont contr\u00f4ler comme les marionnettes avec des fils. d'apr\u00e8s les 'ont dit'..."}]}
Monsia/camembert-fr-covid-tweet-classification
null
[ "transformers", "pytorch", "camembert", "text-classification", "classification", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #camembert #text-classification #classification #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# camembert-fr-covid-tweet-classification This model is a fine-tune checkpoint of Yanzhu/bertweetfr-base, fine-tuned on SST-2. This model reaches an accuracy of 66.00% on the dev set. In this dataset, given a tweet, the goal was to infer the underlying topic of the tweet by choosing from four topics classes: - chiffres : this means, the tweet talk about statistics of covid. - mesures : this means, the tweet talk about measures take by government of covid - opinions : this means, the tweet talk about opinion of people like fake new. - symptomes : this means, the tweet talk about symptoms or variant of covid. - divers : or other # Pipelining the Model
[ "# camembert-fr-covid-tweet-classification\nThis model is a fine-tune checkpoint of Yanzhu/bertweetfr-base, fine-tuned on SST-2.\nThis model reaches an accuracy of 66.00% on the dev set.\n\nIn this dataset, given a tweet, the goal was to infer the underlying topic of the tweet by choosing from four topics classes:\n- chiffres : this means, the tweet talk about statistics of covid.\n- mesures : this means, the tweet talk about measures take by government of covid \n- opinions : this means, the tweet talk about opinion of people like fake new. \n- symptomes : this means, the tweet talk about symptoms or variant of covid.\n- divers : or other\n \n # Pipelining the Model" ]
[ "TAGS\n#transformers #pytorch #camembert #text-classification #classification #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# camembert-fr-covid-tweet-classification\nThis model is a fine-tune checkpoint of Yanzhu/bertweetfr-base, fine-tuned on SST-2.\nThis model reaches an accuracy of 66.00% on the dev set.\n\nIn this dataset, given a tweet, the goal was to infer the underlying topic of the tweet by choosing from four topics classes:\n- chiffres : this means, the tweet talk about statistics of covid.\n- mesures : this means, the tweet talk about measures take by government of covid \n- opinions : this means, the tweet talk about opinion of people like fake new. \n- symptomes : this means, the tweet talk about symptoms or variant of covid.\n- divers : or other\n \n # Pipelining the Model" ]
text-classification
transformers
# camembert-fr-covid-tweet-sentiment-classification
This model is a fine-tuned checkpoint of [Yanzhu/bertweetfr-base](https://huggingface.co/Yanzhu/bertweetfr-base), fine-tuned on SST-2.
This model reaches an accuracy of 71% on the dev set.
In this dataset, given a tweet, the goal was to infer the underlying sentiment of the tweet by choosing from three classes:
- 0 : negatif
- 1 : neutre
- 2 : positif

# Pipelining the Model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("data354/camembert-fr-covid-tweet-sentiment-classification")
model = AutoModelForSequenceClassification.from_pretrained("data354/camembert-fr-covid-tweet-sentiment-classification")

# The checkpoint carries a standard sequence-classification head, so the
# generic text-classification pipeline serves it directly.
nlp_sentiment_classif = pipeline("text-classification", model=model, tokenizer=tokenizer)
nlp_sentiment_classif("tchai on est morts. on va se faire vacciner et ils vont contrôler comme les marionnettes avec des fils. d'après les '' ont dit ''...")
# Output: [{'label': 'opinions', 'score': 0.831}]
```
{"language": ["fr"], "license": "apache-2.0", "tags": ["classification"], "metrics": ["accuracy"], "widget": [{"text": "tchai on est morts. on va se faire vacciner et ils vont contr\u00f4ler comme les marionnettes avec des fils. d'apr\u00e8s les 'ont dit'..."}]}
data354/camembert-fr-covid-tweet-sentiment-classification
null
[ "transformers", "pytorch", "camembert", "text-classification", "classification", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "fr" ]
TAGS #transformers #pytorch #camembert #text-classification #classification #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# camembert-fr-covid-tweet-sentiment-classification This model is a fine-tune checkpoint of Yanzhu/bertweetfr-base, fine-tuned on SST-2. This model reaches an accuracy of 71% on the dev set. In this dataset, given a tweet, the goal was to infer the underlying topic of the tweet by choosing from four topics classes: - 0 : negatif - 1 : neutre - 2 : positif # Pipelining the Model
[ "# camembert-fr-covid-tweet-sentiment-classification\nThis model is a fine-tune checkpoint of Yanzhu/bertweetfr-base, fine-tuned on SST-2.\nThis model reaches an accuracy of 71% on the dev set.\nIn this dataset, given a tweet, the goal was to infer the underlying topic of the tweet by choosing from four topics classes:\n- 0 : negatif\n- 1 : neutre \n- 2 : positif", "# Pipelining the Model" ]
[ "TAGS\n#transformers #pytorch #camembert #text-classification #classification #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# camembert-fr-covid-tweet-sentiment-classification\nThis model is a fine-tune checkpoint of Yanzhu/bertweetfr-base, fine-tuned on SST-2.\nThis model reaches an accuracy of 71% on the dev set.\nIn this dataset, given a tweet, the goal was to infer the underlying topic of the tweet by choosing from four topics classes:\n- 0 : negatif\n- 1 : neutre \n- 2 : positif", "# Pipelining the Model" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-model-lg-data This model is a fine-tuned version of [Monsia/test-model-lg-data](https://huggingface.co/Monsia/test-model-lg-data) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3354 - Wer: 0.4150 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0236 | 0.67 | 100 | 0.4048 | 0.4222 | | 0.0304 | 1.35 | 200 | 0.4266 | 0.4809 | | 0.0545 | 2.03 | 300 | 0.4309 | 0.4735 | | 0.0415 | 2.7 | 400 | 0.4269 | 0.4595 | | 0.033 | 3.38 | 500 | 0.4085 | 0.4537 | | 0.0328 | 4.05 | 600 | 0.3642 | 0.4224 | | 0.0414 | 4.73 | 700 | 0.3354 | 0.4150 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.13.3 - Tokenizers 0.10.3
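The card above only reports the fine-tuning run on Common Voice. The sketch below is an assumption about usage, not part of the original card: it applies the usual Wav2Vec2 CTC decoding path with this record's repo id, and the one-second silent array is a stand-in for a real 16 kHz recording.

```python
# Hedged sketch (assumption, not from the card): greedy CTC decoding.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

name = "Monsia/test-model-lg-data"
processor = Wav2Vec2Processor.from_pretrained(name)
model = Wav2Vec2ForCTC.from_pretrained(name)

speech = [0.0] * 16_000  # placeholder: one second of silence instead of real audio
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits
ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(ids))
```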
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "test-model-lg-data", "results": []}]}
Monsia/test-model-lg-data
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
test-model-lg-data ================== This model is a fine-tuned version of Monsia/test-model-lg-data on the common\_voice dataset. It achieves the following results on the evaluation set: * Loss: 0.3354 * Wer: 0.4150 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 200 * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.11.3 * Pytorch 1.10.0+cu113 * Datasets 1.13.3 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu113\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu113\n* Datasets 1.13.3\n* Tokenizers 0.10.3" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
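The card documents the training configuration but not inference, and the training dataset (and therefore the entity label set) is not named. A minimal sketch under those caveats, assuming the generic token-classification pipeline and the repo id of this record:

```python
# Hedged sketch (assumption, not from the card): run NER with entity grouping.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Mood/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Ada Lovelace worked with Charles Babbage in London."))
```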
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": []}]}
Mood/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of distilbert-base-uncased on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
[ "# distilbert-base-uncased-finetuned-ner\n\nThis model is a fine-tuned version of distilbert-base-uncased on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu111\n- Datasets 1.15.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# distilbert-base-uncased-finetuned-ner\n\nThis model is a fine-tuned version of distilbert-base-uncased on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu111\n- Datasets 1.15.1\n- Tokenizers 0.10.3" ]
text-generation
transformers
# Nyivae DialoGPT Model
{"tags": ["conversational"]}
MoonlitEtherna/DialoGPT-small-Nyivae
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Nyivae DialoGPT Model
[ "# Nyivae DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Nyivae DialoGPT Model" ]
zero-shot-classification
transformers
# DeBERTa-v3-base-mnli-fever-anli
## Model description
This model was trained on the MultiNLI, Fever-NLI and Adversarial-NLI (ANLI) datasets, which comprise 763 913 NLI hypothesis-premise pairs. This base model outperforms almost all large models on the [ANLI benchmark](https://github.com/facebookresearch/anli). The base model is [DeBERTa-v3-base from Microsoft](https://huggingface.co/microsoft/deberta-v3-base). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf).
For highest performance (but less speed), I recommend using https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli.
### How to use the model
#### Simple zero-shot classification pipeline
```python
#!pip install transformers[sentencepiece]
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli")
sequence_to_classify = "Angela Merkel is a politician in Germany and leader of the CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # keep the model on the same device as the inputs

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))  # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
DeBERTa-v3-base-mnli-fever-anli was trained on the MultiNLI, Fever-NLI and Adversarial-NLI (ANLI) datasets, which comprise 763 913 NLI hypothesis-premise pairs.

### Training procedure
DeBERTa-v3-base-mnli-fever-anli was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
    num_train_epochs=3,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # number of warmup steps for learning rate scheduler
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```
### Eval results
The model was evaluated using the test sets for MultiNLI and ANLI and the dev set for Fever-NLI. The metric used is accuracy.

mnli-m | mnli-mm | fever-nli | anli-all | anli-r3
---------|----------|---------|----------|----------
0.903 | 0.903 | 0.777 | 0.579 | 0.495

## Limitations and bias
Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.

## Citation
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. 
‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k. ### Ideas for cooperation or questions? If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/) ### Debugging and issues Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues. Also make sure to install sentencepiece to avoid tokenizer errors. Run: `pip install transformers[sentencepiece]` or `pip install sentencepiece` ## Model Recycling [Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=0.65&mnli_lp=nan&20_newsgroup=-0.61&ag_news=-0.01&amazon_reviews_multi=0.46&anli=0.84&boolq=2.12&cb=16.07&cola=-0.76&copa=8.60&dbpedia=-0.40&esnli=-0.29&financial_phrasebank=-1.98&imdb=-0.47&isear=-0.22&mnli=-0.21&mrpc=0.50&multirc=1.91&poem_sentiment=1.73&qnli=0.07&qqp=-0.37&rotten_tomatoes=-0.74&rte=3.94&sst2=-0.45&sst_5bins=0.07&stsb=1.27&trec_coarse=-0.16&trec_fine=0.18&tweet_ev_emoji=-0.93&tweet_ev_emotion=-1.33&tweet_ev_hate=-1.67&tweet_ev_irony=-5.46&tweet_ev_offensive=-0.17&tweet_ev_sentiment=-0.11&wic=-0.21&wnli=-1.20&wsc=4.18&yahoo_answers=-0.70&model_name=MoritzLaurer%2FDeBERTa-v3-base-mnli-fever-anli&base_name=microsoft%2Fdeberta-v3-base) using MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli as a base model yields average score of 79.69 in comparison to 79.04 by microsoft/deberta-v3-base. The model is ranked 2nd among all tested models for the microsoft/deberta-v3-base architecture as of 09/01/2023. Results: | 20_newsgroup | ag_news | amazon_reviews_multi | anli | boolq | cb | cola | copa | dbpedia | esnli | financial_phrasebank | imdb | isear | mnli | mrpc | multirc | poem_sentiment | qnli | qqp | rotten_tomatoes | rte | sst2 | sst_5bins | stsb | trec_coarse | trec_fine | tweet_ev_emoji | tweet_ev_emotion | tweet_ev_hate | tweet_ev_irony | tweet_ev_offensive | tweet_ev_sentiment | wic | wnli | wsc | yahoo_answers | |---------------:|----------:|-----------------------:|-------:|--------:|--------:|--------:|-------:|----------:|--------:|-----------------------:|-------:|--------:|--------:|--------:|----------:|-----------------:|-------:|--------:|------------------:|--------:|--------:|------------:|--------:|--------------:|------------:|-----------------:|-------------------:|----------------:|-----------------:|---------------------:|---------------------:|--------:|--------:|--------:|----------------:| | 85.8072 | 90.4333 | 67.32 | 59.625 | 85.107 | 91.0714 | 85.8102 | 67 | 79.0333 | 91.6327 | 82.5 | 94.02 | 71.6428 | 89.5749 | 89.7059 | 64.1708 | 88.4615 | 93.575 | 91.4148 | 89.6811 | 86.2816 | 94.6101 | 57.0588 | 91.5508 | 97.6 | 91.2 | 45.264 | 82.6179 | 54.5455 | 74.3622 | 84.8837 | 71.6949 | 71.0031 | 69.0141 | 68.2692 | 71.3333 | For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)
{"language": ["en"], "license": "mit", "tags": ["text-classification", "zero-shot-classification"], "datasets": ["multi_nli", "anli", "fever"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification", "model-index": [{"name": "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli", "results": [{"task": {"type": "natural-language-inference", "name": "Natural Language Inference"}, "dataset": {"name": "anli", "type": "anli", "config": "plain_text", "split": "test_r3"}, "metrics": [{"type": "accuracy", "value": 0.495, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWViYjQ5YTZlYjU4NjQyN2NhOTVhNjFjNGQyMmFiNmQyZjRkOTdhNzJmNjc3NGU4MmY0MjYyMzY5MjZhYzE0YiIsInZlcnNpb24iOjF9.S8pIQ7gEGokd_wKXMi6Bc3B2DThIP3cvVkTFErZ-2JxXTSCy1TBuulY3dzGfaiP7kTHbL52OuBhG_-wb7Ue9DQ"}, {"type": "precision", "value": 0.4984740618243923, "name": "Precision Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTllZDU3NmVmYjk4ZmYzNjAwNzExMGZjNDMzOWRkZjRjMTRhNzhlZmI0ZmNlM2E0Mzk4OWE5NTM5MTYyYWU5NCIsInZlcnNpb24iOjF9.WHz_TUJgPVn-rU-9vBCDdmSMOuWzADwr09rJY6ktqRM46zytbyWs7Vcm7jqDrTkfU-rp0_7IyoNv_xEsKhJbBA"}, {"type": "precision", "value": 0.495, "name": "Precision Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjllODE3ZjUxZDhiMTI0MzZmYjY5OTUwYWI2OTc4ZjJhNTVjMjY2ODdkMmJlZjQ5YWQ1Mjk2ZThmYjJlM2RlYSIsInZlcnNpb24iOjF9.a9V06-O7l9S0Bv4vj0aard8128SAP61DZdXl_3XqdmNgt_C6KAoDBVueF2M2kF_kT6lRfEz6YW0ACIfJNXDYAA"}, {"type": "precision", "value": 0.4984357572868885, "name": "Precision Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjhiMzYzY2JiMmYwN2YxYzEwZTQ3NGI1NzFmMzliNjJkMDE2YzI5Njg1ZjEzMGIxODdiMDNmYmI4Y2Y2MmJkMiIsInZlcnNpb24iOjF9.xvZZaUMogw9MJjb3ls6h5liDlTqHMmNgqk6KbyDqQWfCcD255brCU3Xo6nECwaChS4te0dQu_iWGBqR_o2kYAA"}, {"type": "recall", "value": 0.49461028192371476, "name": "Recall Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDVjYTEzOTI0ZjVhOTk3ZTkzZmZhNTk5ODcxMWJhYWU4ZTRjYWVhNzcwOWY5YmI2NGFlYWE4NjM5MDY5NTExOSIsInZlcnNpb24iOjF9.xgHCB2rbCQBzHzUokw4u8JyOdhtF4yvPv1t8t7YiEkaAuM5MAPsVuCZ1VtlLapHS_IWetlocizsVl6akjh3cAQ"}, {"type": "recall", "value": 0.495, "name": "Recall Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTEyYmM0ZDQ0M2RiMDNhNjIxNzQ4OWZiNTBiOTAwZDFkNjNmYjBhNjA4NmQ0NjFkNmNiZTljNDkxNDg3NzIyYSIsInZlcnNpb24iOjF9.3FJPwNtwgFNvMjVxVAayaVXXR1sWlr0sqAYmXzmMzMxl7IJh6RS77dGPwFaqD3jamLVBiqPn9wsfz5lFK5yTAA"}, {"type": "recall", "value": 0.495, "name": "Recall Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmY1MjZlZTQ4OTg5YzdlYmFhZDMzMmNlNjNkYmIyZGI4M2NjZjQ1ZDVkNmZkMTUxNjI3M2UwZmI1MDM1NDYwOSIsInZlcnNpb24iOjF9.cnbM6xjTLRa9z0wEDGd_Q4lTXVLRKIQ6_YLGLjf-t7Nto4lzxAeWF-RrwA0Mq9OPITlJq2Jk1Eg_0Utb13d9Dg"}, {"type": "f1", "value": 0.4942810999491704, "name": "F1 Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2U3NGM1MDM4YTM4NzQxMGM4ZTIyZDM2YTQ1MGNlZWM1MzEzM2MxN2ZmZmRmYTM0OWJmZGJjYjM5OWEzMmZjNSIsInZlcnNpb24iOjF9.vMtge1F-tmMn9D3aVUuwcNEXjqpNgEyHAl9f5UDSoTYcOgTwi2vi5yRGRCl8y6Fx7BtgaCwMyoZVNbP5-GRtCA"}, {"type": "f1", "value": 0.495, "name": "F1 Micro", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjBjMTQ5MmQ5OGE5OWJjZGMyNzg4N2RmNDUzMzQ5Zjc4ZTc4N2JlMTk0MTc2M2RjZTgzOTNlYWQzODAwNDI0NCIsInZlcnNpb24iOjF9.yxXG0CNWW8__xJC14BjbTY9QkXD75x6uCIXR51oKDemkP0b_xGyd-A2wPIuwNJN1EYkQevPY0bhVpRWBKyO9Bg"}, {"type": "f1", "value": 0.4944671868893595, "name": "F1 Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzczNjQzY2FmMmY4NTAwYjNkYjJlN2I2NjI2Yjc0ZmQ3NjZiN2U5YWEwYjk4OTUyOTMzZTYyZjYzOTMzZGU2YiIsInZlcnNpb24iOjF9.mLOnst2ScPX7ZQwaUF12W2nv7-w9lX9-BxHl3-0T0gkSWnmtBSwYcL5faTX0_I5q33Fjz5tfkjpCJuxP5JYIBQ"}, {"type": "loss", "value": 1.8788293600082397, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzRlOTYwYjU1Y2Y4ZGM0NDBjYTE2MmEzNWIwN2NiMWVkOWZlNzA2ZmQ3YjZjNzI4MjQwYWZhODIwMzU3ODAyZiIsInZlcnNpb24iOjF9._Xs9bl48MSavvp5eyamrP2iNlFWv35QZCrmWjJXLkUdIBx0ElCjEdxBb3dxPGnUxdpDzGMmOoKCPI44ZPXrtDw"}]}, {"task": {"type": "natural-language-inference", "name": "Natural Language Inference"}, "dataset": {"name": "anli", "type": "anli", "config": "plain_text", "split": "test_r1"}, "metrics": [{"type": "accuracy", "value": 0.712, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWYxMGY0ZWU0YTEyY2I3NmQwZmQ3YmFmNzQxNGU5OGNjN2ViN2I0ZjdkYWUzM2RmYzkzMDg3ZjVmNGYwNGZkZCIsInZlcnNpb24iOjF9.snWBusAeo1rrQqWk--vTxb-CBcFqM298YCtwTQGBZiFegKGSTSKzj-SM6HMNsmoQWmMuv7UfYPqYlnzEthOSAg"}, {"type": "precision", "value": 0.7134839439315348, "name": "Precision Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjMxMjg1Y2QwNzMwM2ZkNGM3ZTJhOGJmY2FkNGI1ZTFhOGQ3ODViNTJmZTYwMWJkZDYyYWRjMzFmZDI1NTM5YSIsInZlcnNpb24iOjF9.ZJnY6zYOBn-YEtN7uKzQ-VKXPwlIO1zq19Yuo37vBJNSs1dGDd8f1jgfdZuA19e_wA3Nc5nQKe9VXRwPHPgwAQ"}, {"type": "precision", "value": 0.712, "name": "Precision Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWM4YWQyODBlYTIwMWQxZDA1NmY1M2M2ODgwNDJiY2RhMDVhYTlkMDUzZTJkMThkYzRmNDg2YTdjMjczNGUwOCIsInZlcnNpb24iOjF9.SogsKHdbdlEs05IBYwXvlnaC_esg-DXAPc2KPRyHaVC5ItVHbxa63NpybSpao4baOoMlLG9aRe7TjG4gtB2dAQ"}, {"type": "precision", "value": 0.7134676028447461, "name": "Precision Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODdjMzFkM2IwNWZiM2I4ZWViMmQ4NWM5MDY5ZWQxZjc1MGRmNjhmNzJhYWFmOWEwMjg3ZjhiZWM3YjlhOTIxNSIsInZlcnNpb24iOjF9._0JNIbiqLuDZrp_vrCljBe28xexZJPmigLyhkcO8AtH2VcNxWshwCpZuRF4bqvpMvnApJeuGMf3vXjCj0MC1Bw"}, {"type": "recall", "value": 0.7119814425203647, "name": "Recall Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjU4MWEyMzkyYzg1ZTIxMTc0M2NhMTgzOGEyZmY5OTg3M2Q1ZmMwNmU3ZmU1ZjA1MDk0OGZkMzM5NDVlZjBlNSIsInZlcnNpb24iOjF9.sZ3GTcmGGthpTLL7_Zovq8aBmE3Dp_PZi5v8ZI9yG9N6B_GjWvBuPC8ENXK1NwmwiHLsSvtKTG5JmAum-su0Dg"}, {"type": "recall", "value": 0.712, "name": "Recall Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDg3NGViZTlmMWM2ZDNhMzIzZGZkYWZhODQxNzg2MjNiNjQ0Zjg0NjQ1OWZkY2I5ODdiY2Y3Y2JjNzRmYjJkMiIsInZlcnNpb24iOjF9.bCZUzJamsozKWehnNph6E5coww5zZTrJdbWevWrSyfT0PyXc_wkZ-NKdyBAoqprBz3_8L3i5hPM6Qsy56b4BDA"}, {"type": "recall", "value": 0.712, "name": "Recall Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDk1MDJiOGUzZThlZjJjMzY4NjMzODFiZjUzZmIwMjIxY2UwNzBiN2IxMWEwMGJjZTkxODA0YzUxZDE3ODRhOCIsInZlcnNpb24iOjF9.z0dqvB3aBVYt3xRIb_M4svWebfQc0QaDFVFzHnlA5QGEHkHOW3OecGhHE4EzBqTDI3DASWZTGMjrMDDt0uOMBw"}, {"type": "f1", 
"value": 0.7119226991285647, "name": "F1 Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2U0YjMwNzhmOTEyNDZhODU3MTU0YTM4MmQ0NzEzNWI1YjY0ZWQ3MWRiMTdiNTUzNWRkZThjMWE4M2NkZmI0MiIsInZlcnNpb24iOjF9.hhj1BXkuWi9wXrCjT9NwqaPETtOoYNiyqYsJEw-ufA8A4hVThKA6ZBtma1Q_M65-DZFfPEBDBNASLZ7EPSbmDw"}, {"type": "f1", "value": 0.712, "name": "F1 Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODk0Y2EyMzc5M2ZlNWFlNDg2Zjc1OTQxNGY3YjA5YjUxYTYzZjRlZmU4ODYxNjA3ZjkxNGUzYjBmNmMxMzY5YiIsInZlcnNpb24iOjF9.DvKk-3hNh2LhN2ug5e0FgUntL3Ozdfl06Kz7jvmB-deOJH6INi2a2ZySXoEePoo8t2nR6ENFYu9QjMA2ojnpCA"}, {"type": "f1", "value": 0.7119242267218338, "name": "F1 Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2MxOWFlMmI2NGRiMjkwN2Q5MWZhNDFlYzQxNWNmNzQ3OWYxZThmNDU2OWU1MTE5OGY2MWRlYWUyNDM3OTkzZCIsInZlcnNpb24iOjF9.QrTD1gE8_wRok9u59W-Mx0cX89K-h2Ad6qa8J5rmP8lc_rkG0ft2n5_GqH1CBZBJwMFYv91Pn6TuE3eGxJuUDA"}, {"type": "loss", "value": 1.0105403661727905, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmUwMTg4NjM3ZTBiZTIyODcyNDNmNTE5ZDZhMzNkMDMyNjcwOGQ5NmY0NTlhMjgyNmIzZjRiNDFiNjA3M2RkZSIsInZlcnNpb24iOjF9.sjBDVJV-jnygwcppmByAXpoo-Wzz178bBzozJEuYEiJaHSbk_xEevfJS1PmLUuplYslKb1iyEctnjI-5bl-XDw"}]}, {"task": {"type": "natural-language-inference", "name": "Natural Language Inference"}, "dataset": {"name": "multi_nli", "type": "multi_nli", "config": "default", "split": "validation_mismatched"}, "metrics": [{"type": "accuracy", "value": 0.902766476810415, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjExZWM3YzA3ZDNlNjEwMmViNWEwZTE3MjJjNjEyNDhjOTQxNGFmMzBjZTk0ODUwYTc2OGNiZjYyMTBmNWZjZSIsInZlcnNpb24iOjF9.zbFAGrv2flpmweqS7Poxib7qHFLdW8eUTzshdOm2B9H-KWpIZCWC-P4p8TLMdNJnUcZJZ03Okil4qjIMqqIRCA"}, {"type": "precision", "value": 0.9023816542652491, "name": "Precision Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2U2MGViNmJjNWQxNzRjOTkxNDIxZjZjNmM5YzE4ZjU5NTE5NjFlNmEzZWRlOGYxN2E3NTAwMTEwYjNhNzE0YSIsInZlcnNpb24iOjF9.WJjDJf56FROvf7Y5ShWnnxMvK_ZpQ2PibAOtSFhSiYJ7bt4TGOzMwaZ5RSTf_mcfXgRfWbXmy1jCwNhDb-5EAw"}, {"type": "precision", "value": 0.902766476810415, "name": "Precision Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzRhZTExOTc5NDczZjI1YmMzOGYyOTU2MDU1OGE5ZTczMDE0MmU0NzZhY2YzMDI1ZGQ3MGM5MmJiODFkNzUzZiIsInZlcnNpb24iOjF9.aRYcGEI1Y8-a0d8XOoXhBgsFyj9LWNwEjoIPc594y7kJn91wXIsXoR0-_0iy3uz41mWaTTlwJx7lI-kipFDvDQ"}, {"type": "precision", "value": 0.9034597464719761, "name": "Precision Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWQyMTZiZDA2OTUwZjRmNTFiMWRlZTNmOTliZmI2MWFmMjdjYzEyYTgwNzkyOTQzOTBmNTUyYjMwNTUxMTFkNiIsInZlcnNpb24iOjF9.hUtAMTl0THHUkaLcgk1Vy9IhjqJAXCJ_5STJ5A7k7s_SO9DHp3b6qusgwPmcGLYyPy1-j1dB2AIstxK4tHfmDA"}, {"type": "recall", "value": 0.9024304801555488, "name": "Recall Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzAxZGJhNGI3ZDNlMjg2ZDIxNTgwMDY5MTFjM2ExZmIxMDBmZjUyNTliNWNkOGI0OTY3NTYyNWU3OWFlYTA3YiIsInZlcnNpb24iOjF9.1o_GNq8zmXa_50MUF_K63IDc2aUKNeUkNQ5fT592-SAo8WgiaP9Dh6bOEu2OqrpRQ57P4qm7OdJt7UKsrosMDA"}, {"type": "recall", "value": 0.902766476810415, "name": "Recall Micro", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjhiMWE4Yjk0ODFkZjlkYjRlMjU1OTJmMjA2Njg1N2M4MzQ0OWE3N2FlYjY4NDgxZThjMmExYWQ5OGNmYmI1NSIsInZlcnNpb24iOjF9.Gmm5lf_qpxjXWWrycDze7LHR-6WGQc62WZTmcoc5uxWd0tivEUqCAFzFdbEU1jVKxQBIyDX77CPuBm7mUA4sCg"}, {"type": "recall", "value": 0.902766476810415, "name": "Recall Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2EzZWYwNjNkYWE1YTcyZGZjNTNhMmNlNzgzYjk5MGJjOWJmZmE5NmYwM2U2NTA5ZDY3ZjFiMmRmZmQwY2QwYiIsInZlcnNpb24iOjF9.yA68rslg3e9kUR3rFTNJJTAad6Usr4uFmJvE_a7G2IvSKqLxG_pqsHszsWfg5mFBQLjWEAyCtdQYMdVayuYMBA"}, {"type": "f1", "value": 0.9023086094638595, "name": "F1 Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzMyMzZhNjI5MWRmZWJhMjkzN2E0MjM4ZTM5YzZmNTk5YTZmYzU4NDRiYjczZGQ4MDdhNjJiMGU0MjE3NDEwNyIsInZlcnNpb24iOjF9.RCMqH_xUMN97Vos54pTFfAMbLstXUMdFTs-eNaypbDb_Fc-MW8NLmJ6dzJsp9sSvhXyYjugjRMUpMpnQseKXDA"}, {"type": "f1", "value": 0.902766476810415, "name": "F1 Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTYxZTZhZGM0NThlNTAzNmYwMTA4NDNkN2FiNzhhN2RlYThlYjcxMjE5MjBkMzhiOGYxZGRmMjE0NGM2ZWQ5ZSIsInZlcnNpb24iOjF9.wRfllNw2Gibmi1keU7d_GjkyO0F9HESCgJlJ9PHGZQRRT414nnB-DyRvulHjCNnaNjXqMi0LJimC3iBrNawwAw"}, {"type": "f1", "value": 0.9030161011457231, "name": "F1 Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDA0YjAxMWU5MjI4MWEzNTNjMzJlNjM3ZDMxOTE0ZTZhYmZlNmUyNDViNTU2NmMyMmM3MjAxZWVjNWJmZjI4MCIsInZlcnNpb24iOjF9.vJ8aUjfTbFMc1BgNUVpoVDuYwQJYQjwZQxblkUdvSoGtkW_AzQJ_KJ8Njc7IBA3ADgj8iZHjRQNIZkFCf-xICw"}, {"type": "loss", "value": 0.3283354640007019, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODdmYzYzNTUzZDNmOWIxM2E0ZmUyOWUzM2Y2NGRmZDNiYjg3ZTMzYTUyNzg3OWEzNzYyN2IyNmExOGRlMWUxYSIsInZlcnNpb24iOjF9.Qv0FzFZPkcBs9aHGf4TEREX4jdkc40NazdMlP2M_-w2wHwyjoAjvhk611RLXHcbicozNelZJLnsOMdEMnPLEDg"}]}, {"task": {"type": "natural-language-inference", "name": "Natural Language Inference"}, "dataset": {"name": "anli", "type": "anli", "config": "plain_text", "split": "dev_r1"}, "metrics": [{"type": "accuracy", "value": 0.737, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTQ1ZGVkOTVmNTlhYjhkMjVlNTNhMjNmZWFjZWZjZjcxZmRhMDVlOWI0YTdkOTMwYjVjNWFlOGY4OTc1MmRhNiIsInZlcnNpb24iOjF9.wGLgKA1E46ljbLokdPeip_UCr1gqK8iSSbsJKX2vgKuuhDdUWWiECrUFN-bv_78JWKoKW5T0GF_hb-RVDzA0AQ"}, {"type": "precision", "value": 0.737681071614645, "name": "Precision Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmFkMGUwMjNhN2E3NzMxNTc5NDM0MjY1MGU5ODllM2Q2YzA1MDI3OGI1ZmI4YTcxN2E4ZDk5OWY2OGNiN2I0MCIsInZlcnNpb24iOjF9.6G5qhccjheaNfasgRyrkKBTaQPRzuPMZZ0hrLxTNzAydMDgx09FkFP3hni7WLRMWp0IpwzkEeBlxV-mPyQBtBw"}, {"type": "precision", "value": 0.737, "name": "Precision Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2QzYjQ4ZDZjOGU5YzI3YmFlMThlYTRkYTUyYWIyNzc4NDkwNzM1OWFiMTgyMzA0NDZmMGI3YTQxODBjM2EwMCIsInZlcnNpb24iOjF9.bvNWyzfct1CLJFx_EuD2GeKieVtyGJy0cwUBP2qJE1ey2i9SVn6n1Dr0AALTGBkxQ6n5-fJ61QFNufpdr2KvCA"}, {"type": "precision", "value": 0.7376755842752241, "name": "Precision Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2VmYWYzZWQwZmMzMDk0NTdlY2Y3NDkzYWY5ZTdmOGU0ZTUzZWE4YWFhZjVmODhkZmE1Njg4NjA5YjJmYWVhOSIsInZlcnNpb24iOjF9.50FQR2aoBpORLgYa7482ZTrRhT-KfIgv5ltBEHndUBMmqGF9Ru0LHENSGwyD_tO89sGPfiW32TxpbrNWiBdIBA"}, {"type": 
"recall", "value": 0.7369675064285843, "name": "Recall Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTM4OTAyNDYwNjY4Zjc5NDljNjBmNTg2Mzk4YjYxM2MyYTA0MDllYTMyNzEwOGI1ZTEwYWE3ZmU0NDZmZDg2NiIsInZlcnNpb24iOjF9.UvWBxuApNV3vd4hpgwqd6XPHCbkA_bB_Cw24ooquiOf0dstvjP3JvpGoDp5SniOzIOg3i2aYbcvFCLJqEXMZCQ"}, {"type": "recall", "value": 0.737, "name": "Recall Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmQ4MjMzNzRmNTI5NjIzNGQ0ZDFmZTA1MDU3OTk0MzYyMGI0NTMzZTZlMTQ1MDc1MzBkMGMzYjcxZjU1NDNjOSIsInZlcnNpb24iOjF9.kpbdXOpDG3CUB-kUEXsgFT3HWWIbu70wwzs2TNf0rhIuRrzdZz3dXXvwqu1BcLJTsOxl8G6NTiYXgnv-ul8lDg"}, {"type": "recall", "value": 0.737, "name": "Recall Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmU1ZWJkNWE0NjczY2NiZWYyNzYyMzllNzZmZTIxNWRkYTEyZDgxN2E0NTNmM2ExMTc1ZWVjMzBiYjg0ZmM1MiIsInZlcnNpb24iOjF9.S6HHWCWnut_LJqXbEA_Z8ZOTtyq6V51ZeiA0qbwzr0hapDYZOZHrN4prvSLvoNv-GiYDYKatwIsAZxCZc5fmCA"}, {"type": "f1", "value": 0.7366853496239583, "name": "F1 Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzkxYmY2NTcyOTE0ZDdjNGY2ZmE4MzQwMGIxZTA2MDg1NzI5YTQ0MTdkZjdkNzNkMDM2NTk2MTNiNjU4ODMwZCIsInZlcnNpb24iOjF9.ECVaCBqGd0pnQT3xJF7yWrgecIb-5TMiVWpEO0MQGhYy43snkI6Qs-2FOXzvfwIWqG-Q6XIIhGbWZh5TFEGKCA"}, {"type": "f1", "value": 0.737, "name": "F1 Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDMwMWZiNzQyNWEzNmMzMDJjOTAxYzAxNzc0MTNlYzRkZjllYmNjZmU0OTgzZDFkNWM1ZWI5OTA2NzE5Y2YxOSIsInZlcnNpb24iOjF9.8yZFol_Gcj9n3w9Yk5wx48yql7p3wriDecv-6VSTAB6Q_MWLQAWsCEGRRhgGJ3zvhoRehJZdb35ozk36VOinDQ"}, {"type": "f1", "value": 0.7366990292378379, "name": "F1 Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjhhN2ZkMjc5ZGQ3ZGM1Nzk3ZTgwY2E1N2NjYjdhNjZlOTdhYmRlNGVjN2EwNTIzN2UyYTY2ODVlODhmY2Q4ZCIsInZlcnNpb24iOjF9.Cz7ClDAfCGpqdRTYd5v3dPjXFq8lZLXx8AX_rqmF-Jb8KocqVDsHWeZScW5I2oy951UrdMpiUOLieBuJLOmCCQ"}, {"type": "loss", "value": 0.9349392056465149, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmI4MTI5MDM1NjBmMzgzMzc2NjM5MzZhOGUyNTgyY2RlZTEyYTIzYzY2ZGJmODcxY2Q5OTVjOWU3OTQ2MzM1NSIsInZlcnNpb24iOjF9.bSOFnYC4Y2y2pW1AR-bgPUHKafR-0OHf8PvexK8eQLsS323Xy9-rYkKUaP09KY6_fk9GqAawv5eqj72B_uyeCA"}]}]}]}
MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli
null
[ "transformers", "pytorch", "safetensors", "deberta-v2", "text-classification", "zero-shot-classification", "en", "dataset:multi_nli", "dataset:anli", "dataset:fever", "arxiv:2006.03654", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2006.03654" ]
[ "en" ]
TAGS #transformers #pytorch #safetensors #deberta-v2 #text-classification #zero-shot-classification #en #dataset-multi_nli #dataset-anli #dataset-fever #arxiv-2006.03654 #license-mit #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
DeBERTa-v3-base-mnli-fever-anli =============================== Model description ----------------- This model was trained on the MultiNLI, Fever-NLI and Adversarial-NLI (ANLI) datasets, which comprise 763 913 NLI hypothesis-premise pairs. This base model outperforms almost all large models on the ANLI benchmark. The base model is DeBERTa-v3-base from Microsoft. The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original DeBERTa paper. For highest performance (but less speed), I recommend using URL ### How to use the model #### Simple zero-shot classification pipeline #### NLI use-case ### Training data DeBERTa-v3-base-mnli-fever-anli was trained on the MultiNLI, Fever-NLI and Adversarial-NLI (ANLI) datasets, which comprise 763 913 NLI hypothesis-premise pairs. ### Training procedure DeBERTa-v3-base-mnli-fever-anli was trained using the Hugging Face trainer with the following hyperparameters. ### Eval results The model was evaluated using the test sets for MultiNLI and ANLI and the dev set for Fever-NLI. The metric used is accuracy. Limitations and bias -------------------- Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases. If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. URL ### Ideas for cooperation or questions? If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn ### Debugging and issues Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues. Also make sure to install sentencepiece to avoid tokenizer errors. Run: 'pip install transformers[sentencepiece]' or 'pip install sentencepiece' Model Recycling --------------- Evaluation on 36 datasets using MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli as a base model yields average score of 79.69 in comparison to 79.04 by microsoft/deberta-v3-base. The model is ranked 2nd among all tested models for the microsoft/deberta-v3-base architecture as of 09/01/2023. Results: For more information, see: Model Recycling
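As a minimal illustration of the zero-shot classification pipeline described above, the sketch below scores a single sentence against a handful of candidate labels; the example sentence and labels are illustrative and not taken from the model card.

```python
from transformers import pipeline

# Load the checkpoint described above into the standard zero-shot pipeline.
classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli",
)

sequence = "The central bank raised interest rates to curb inflation."  # illustrative input
candidate_labels = ["politics", "economy", "entertainment", "environment"]  # illustrative labels

result = classifier(sequence, candidate_labels, multi_label=False)
print(result["labels"][0], round(result["scores"][0], 3))  # highest-scoring label and its probability
```

With `multi_label=False` the scores are softmax-normalized over the candidate labels and sum to one, so the first entry of `result["labels"]` is the pipeline's top prediction.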
[ "### How to use the model", "#### Simple zero-shot classification pipeline", "#### NLI use-case", "### Training data\n\n\nDeBERTa-v3-base-mnli-fever-anli was trained on the MultiNLI, Fever-NLI and Adversarial-NLI (ANLI) datasets, which comprise 763 913 NLI hypothesis-premise pairs.", "### Training procedure\n\n\nDeBERTa-v3-base-mnli-fever-anli was trained using the Hugging Face trainer with the following hyperparameters.", "### Eval results\n\n\nThe model was evaluated using the test sets for MultiNLI and ANLI and the dev set for Fever-NLI. The metric used is accuracy.\n\n\n\nLimitations and bias\n--------------------\n\n\nPlease consult the original DeBERTa paper and literature on different NLI datasets for potential biases.\n\n\nIf you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. URL", "### Ideas for cooperation or questions?\n\n\nIf you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn", "### Debugging and issues\n\n\nNote that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues.\nAlso make sure to install sentencepiece to avoid tokenizer errors. Run: 'pip install transformers[sentencepiece]' or 'pip install sentencepiece'\n\n\nModel Recycling\n---------------\n\n\nEvaluation on 36 datasets using MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli as a base model yields average score of 79.69 in comparison to 79.04 by microsoft/deberta-v3-base.\n\n\nThe model is ranked 2nd among all tested models for the microsoft/deberta-v3-base architecture as of 09/01/2023.\n\n\nResults:\n\n\n\nFor more information, see: Model Recycling" ]
[ "TAGS\n#transformers #pytorch #safetensors #deberta-v2 #text-classification #zero-shot-classification #en #dataset-multi_nli #dataset-anli #dataset-fever #arxiv-2006.03654 #license-mit #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### How to use the model", "#### Simple zero-shot classification pipeline", "#### NLI use-case", "### Training data\n\n\nDeBERTa-v3-base-mnli-fever-anli was trained on the MultiNLI, Fever-NLI and Adversarial-NLI (ANLI) datasets, which comprise 763 913 NLI hypothesis-premise pairs.", "### Training procedure\n\n\nDeBERTa-v3-base-mnli-fever-anli was trained using the Hugging Face trainer with the following hyperparameters.", "### Eval results\n\n\nThe model was evaluated using the test sets for MultiNLI and ANLI and the dev set for Fever-NLI. The metric used is accuracy.\n\n\n\nLimitations and bias\n--------------------\n\n\nPlease consult the original DeBERTa paper and literature on different NLI datasets for potential biases.\n\n\nIf you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. URL", "### Ideas for cooperation or questions?\n\n\nIf you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn", "### Debugging and issues\n\n\nNote that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues.\nAlso make sure to install sentencepiece to avoid tokenizer errors. Run: 'pip install transformers[sentencepiece]' or 'pip install sentencepiece'\n\n\nModel Recycling\n---------------\n\n\nEvaluation on 36 datasets using MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli as a base model yields average score of 79.69 in comparison to 79.04 by microsoft/deberta-v3-base.\n\n\nThe model is ranked 2nd among all tested models for the microsoft/deberta-v3-base architecture as of 09/01/2023.\n\n\nResults:\n\n\n\nFor more information, see: Model Recycling" ]
text-classification
transformers
# DeBERTa-v3-base-mnli-fever-docnli-ling-2c

## Model description
This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).

It is the only model in the model hub trained on 8 NLI datasets, including DocNLI with very long texts to learn long-range reasoning. Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". DocNLI merges the classes "neutral" and "contradiction" into "not-entailment", which enables the inclusion of the DocNLI dataset.

The base model is [DeBERTa-v3-base from Microsoft](https://huggingface.co/microsoft/deberta-v3-base). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf) as well as the [DeBERTa-V3 paper](https://arxiv.org/abs/2111.09543).

For highest performance (but less speed), I recommend using https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli.

### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c")
sequence_to_classify = "Angela Merkel is a politician in Germany and leader of the CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # keep model and inputs on the same device

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```

### Training data
This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).

### Training procedure
DeBERTa-v3-base-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
    num_train_epochs=3,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # ratio of warmup steps for the learning rate scheduler
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```
### Eval results
The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.

mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c | lingnli-2c
---------|----------|---------|----------|----------|------
0.935 | 0.933 | 0.897 | 0.710 | 0.678 | 0.895

## Limitations and bias
Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.

## Citation
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.

### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)

### Debugging and issues
Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues.
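As a further illustration of the two-class ("entailment" vs. "not-entailment") head, the hedged sketch below uses the zero-shot pipeline with a custom hypothesis template and independent (multi-label) scoring; the input text, labels and template are illustrative, and the snippet assumes the standard `hypothesis_template` and `multi_label` arguments of the transformers zero-shot pipeline.

```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c",
)

text = "The quarterly report shows rising revenue but shrinking margins."  # illustrative input
labels = ["finance", "sports", "politics"]                                # illustrative labels

# multi_label=True scores each label independently, which fits a binary
# entailment / not-entailment head; the hypothesis template controls how
# each candidate label is turned into a hypothesis sentence.
result = classifier(
    text,
    labels,
    hypothesis_template="This text is about {}.",
    multi_label=True,
)
print({label: round(score, 3) for label, score in zip(result["labels"], result["scores"])})
```

With `multi_label=True` each score is an independent probability, so several labels can be above 0.5 at the same time.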
{"language": ["en"], "license": "mit", "tags": ["text-classification", "zero-shot-classification"], "metrics": ["accuracy"], "widget": [{"text": "I first thought that I liked the movie, but upon second thought it was actually disappointing. [SEP] The movie was good."}]}
MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c
null
[ "transformers", "pytorch", "safetensors", "deberta-v2", "text-classification", "zero-shot-classification", "en", "arxiv:2104.07179", "arxiv:2106.09449", "arxiv:2006.03654", "arxiv:2111.09543", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.07179", "2106.09449", "2006.03654", "2111.09543" ]
[ "en" ]
TAGS #transformers #pytorch #safetensors #deberta-v2 #text-classification #zero-shot-classification #en #arxiv-2104.07179 #arxiv-2106.09449 #arxiv-2006.03654 #arxiv-2111.09543 #license-mit #autotrain_compatible #endpoints_compatible #region-us
DeBERTa-v3-base-mnli-fever-docnli-ling-2c ========================================= Model description ----------------- This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: MultiNLI, Fever-NLI, LingNLI and DocNLI (which includes ANLI, QNLI, DUC, CNN/DailyMail, Curation). It is the only model in the model hub trained on 8 NLI datasets, including DocNLI with very long texts to learn long range reasoning. Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". The DocNLI merges the classes "neural" and "contradiction" into "not-entailment" to enable the inclusion of the DocNLI dataset. The base model is DeBERTa-v3-base from Microsoft. The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original DeBERTa paper as well as the DeBERTa-V3 paper. For highest performance (but less speed), I recommend using URL ### How to use the model #### Simple zero-shot classification pipeline #### NLI use-case ### Training data This model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: MultiNLI, Fever-NLI, LingNLI and DocNLI (which includes ANLI, QNLI, DUC, CNN/DailyMail, Curation). ### Training procedure DeBERTa-v3-small-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters. ### Eval results The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy. Limitations and bias -------------------- Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases. If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. URL ### Ideas for cooperation or questions? If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn ### Debugging and issues Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues.
[ "### How to use the model", "#### Simple zero-shot classification pipeline", "#### NLI use-case", "### Training data\n\n\nThis model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: MultiNLI, Fever-NLI, LingNLI and DocNLI (which includes ANLI, QNLI, DUC, CNN/DailyMail, Curation).", "### Training procedure\n\n\nDeBERTa-v3-small-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.", "### Eval results\n\n\nThe model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.\n\n\n\nLimitations and bias\n--------------------\n\n\nPlease consult the original DeBERTa paper and literature on different NLI datasets for potential biases.\n\n\nIf you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. URL", "### Ideas for cooperation or questions?\n\n\nIf you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn", "### Debugging and issues\n\n\nNote that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues." ]
[ "TAGS\n#transformers #pytorch #safetensors #deberta-v2 #text-classification #zero-shot-classification #en #arxiv-2104.07179 #arxiv-2106.09449 #arxiv-2006.03654 #arxiv-2111.09543 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use the model", "#### Simple zero-shot classification pipeline", "#### NLI use-case", "### Training data\n\n\nThis model was trained on 1.279.665 hypothesis-premise pairs from 8 NLI datasets: MultiNLI, Fever-NLI, LingNLI and DocNLI (which includes ANLI, QNLI, DUC, CNN/DailyMail, Curation).", "### Training procedure\n\n\nDeBERTa-v3-small-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.", "### Eval results\n\n\nThe model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.\n\n\n\nLimitations and bias\n--------------------\n\n\nPlease consult the original DeBERTa paper and literature on different NLI datasets for potential biases.\n\n\nIf you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. URL", "### Ideas for cooperation or questions?\n\n\nIf you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn", "### Debugging and issues\n\n\nNote that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues." ]
zero-shot-classification
transformers
# DeBERTa-v3-base-mnli

## Model description
This model was trained on the MultiNLI dataset, which consists of 392 702 NLI hypothesis-premise pairs.

The base model is [DeBERTa-v3-base from Microsoft](https://huggingface.co/microsoft/deberta-v3-base). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf).

For a more powerful model, check out [DeBERTa-v3-base-mnli-fever-anli](https://huggingface.co/MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli), which was trained on even more data.

## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "MoritzLaurer/DeBERTa-v3-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)  # keep model and inputs on the same device

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```

### Training data
This model was trained on the MultiNLI dataset, which consists of 392 702 NLI hypothesis-premise pairs.

### Training procedure
DeBERTa-v3-base-mnli was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
    num_train_epochs=5,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # ratio of warmup steps for the learning rate scheduler
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```
### Eval results
The model was evaluated using the matched test set and achieves 0.90 accuracy.

## Limitations and bias
Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.

### BibTeX entry and citation info
If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.

### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)

### Debugging and issues
Note that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues.

## Model Recycling
[Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=0.97&mnli_lp=nan&20_newsgroup=-0.39&ag_news=0.19&amazon_reviews_multi=0.10&anli=1.31&boolq=0.81&cb=8.93&cola=0.01&copa=13.60&dbpedia=-0.23&esnli=-0.51&financial_phrasebank=0.61&imdb=-0.26&isear=-0.35&mnli=-0.34&mrpc=1.24&multirc=1.50&poem_sentiment=-0.19&qnli=0.30&qqp=0.13&rotten_tomatoes=-0.55&rte=3.57&sst2=0.35&sst_5bins=0.39&stsb=1.10&trec_coarse=-0.36&trec_fine=-0.02&tweet_ev_emoji=1.11&tweet_ev_emotion=-0.35&tweet_ev_hate=1.43&tweet_ev_irony=-2.65&tweet_ev_offensive=-1.69&tweet_ev_sentiment=-1.51&wic=0.57&wnli=-2.61&wsc=9.95&yahoo_answers=-0.33&model_name=MoritzLaurer%2FDeBERTa-v3-base-mnli&base_name=microsoft%2Fdeberta-v3-base) using MoritzLaurer/DeBERTa-v3-base-mnli as a base model yields average score of 80.01 in comparison to 79.04 by microsoft/deberta-v3-base.

The model is ranked 1st among all tested models for the microsoft/deberta-v3-base architecture as of 09/01/2023.

Results:

| 20_newsgroup | ag_news | amazon_reviews_multi | anli | boolq | cb | cola | copa | dbpedia | esnli | financial_phrasebank | imdb | isear | mnli | mrpc | multirc | poem_sentiment | qnli | qqp | rotten_tomatoes | rte | sst2 | sst_5bins | stsb | trec_coarse | trec_fine | tweet_ev_emoji | tweet_ev_emotion | tweet_ev_hate | tweet_ev_irony | tweet_ev_offensive | tweet_ev_sentiment | wic | wnli | wsc | yahoo_answers |
|---------------:|----------:|-----------------------:|--------:|--------:|--------:|--------:|-------:|----------:|--------:|-----------------------:|-------:|--------:|--------:|--------:|----------:|-----------------:|--------:|--------:|------------------:|--------:|--------:|------------:|-------:|--------------:|------------:|-----------------:|-------------------:|----------------:|-----------------:|---------------------:|---------------------:|--------:|--------:|--------:|----------------:|
| 86.0196 | 90.6333 | 66.96 | 60.0938 | 83.792 | 83.9286 | 86.5772 | 72 | 79.2 | 91.419 | 85.1 | 94.232 | 71.5124 | 89.4426 | 90.4412 | 63.7583 | 86.5385 | 93.8129 | 91.9144 | 89.8687 | 85.9206 | 95.4128 | 57.3756 | 91.377 | 97.4 | 91 | 47.302 | 83.6031 | 57.6431 | 77.1684 | 83.3721 | 70.2947 | 71.7868 | 67.6056 | 74.0385 | 71.7 |

For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)
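For scoring several premise–hypothesis pairs at once, the minimal batched sketch below can be used; the premises and hypotheses are illustrative, and the label order follows the NLI example above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/DeBERTa-v3-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

premises = [
    "The match was cancelled because of heavy rain.",
    "The company reported record profits this quarter.",
]  # illustrative premises
hypotheses = [
    "The match took place as planned.",
    "The company is doing well financially.",
]  # illustrative hypotheses

# Tokenize all pairs in one call; padding makes the batch rectangular.
inputs = tokenizer(premises, hypotheses, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

label_names = ["entailment", "neutral", "contradiction"]
for pair_probs in torch.softmax(logits, dim=-1):
    print({name: round(float(p), 3) for name, p in zip(label_names, pair_probs)})
```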
{"language": ["en"], "tags": ["text-classification", "zero-shot-classification"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification"}
MoritzLaurer/DeBERTa-v3-base-mnli
null
[ "transformers", "pytorch", "safetensors", "deberta-v2", "text-classification", "zero-shot-classification", "en", "arxiv:2006.03654", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2006.03654" ]
[ "en" ]
TAGS #transformers #pytorch #safetensors #deberta-v2 #text-classification #zero-shot-classification #en #arxiv-2006.03654 #autotrain_compatible #endpoints_compatible #has_space #region-us
DeBERTa-v3-base-mnli-fever-anli =============================== Model description ----------------- This model was trained on the MultiNLI dataset, which consists of 392 702 NLI hypothesis-premise pairs. The base model is DeBERTa-v3-base from Microsoft. The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original DeBERTa paper. For a more powerful model, check out DeBERTa-v3-base-mnli-fever-anli which was trained on even more data. Intended uses & limitations --------------------------- #### How to use the model ### Training data This model was trained on the MultiNLI dataset, which consists of 392 702 NLI hypothesis-premise pairs. ### Training procedure DeBERTa-v3-base-mnli was trained using the Hugging Face trainer with the following hyperparameters. ### Eval results The model was evaluated using the matched test set and achieves 0.90 accuracy. Limitations and bias -------------------- Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases. ### BibTeX entry and citation info If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub. ### Ideas for cooperation or questions? If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn ### Debugging and issues Note that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues. Model Recycling --------------- Evaluation on 36 datasets using MoritzLaurer/DeBERTa-v3-base-mnli as a base model yields average score of 80.01 in comparison to 79.04 by microsoft/deberta-v3-base. The model is ranked 1st among all tested models for the microsoft/deberta-v3-base architecture as of 09/01/2023. Results: For more information, see: Model Recycling
[ "#### How to use the model", "### Training data\n\n\nThis model was trained on the MultiNLI dataset, which consists of 392 702 NLI hypothesis-premise pairs.", "### Training procedure\n\n\nDeBERTa-v3-base-mnli was trained using the Hugging Face trainer with the following hyperparameters.", "### Eval results\n\n\nThe model was evaluated using the matched test set and achieves 0.90 accuracy.\n\n\nLimitations and bias\n--------------------\n\n\nPlease consult the original DeBERTa paper and literature on different NLI datasets for potential biases.", "### BibTeX entry and citation info\n\n\nIf you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.", "### Ideas for cooperation or questions?\n\n\nIf you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn", "### Debugging and issues\n\n\nNote that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues.\n\n\nModel Recycling\n---------------\n\n\nEvaluation on 36 datasets using MoritzLaurer/DeBERTa-v3-base-mnli as a base model yields average score of 80.01 in comparison to 79.04 by microsoft/deberta-v3-base.\n\n\nThe model is ranked 1st among all tested models for the microsoft/deberta-v3-base architecture as of 09/01/2023.\n\n\nResults:\n\n\n\nFor more information, see: Model Recycling" ]
[ "TAGS\n#transformers #pytorch #safetensors #deberta-v2 #text-classification #zero-shot-classification #en #arxiv-2006.03654 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "#### How to use the model", "### Training data\n\n\nThis model was trained on the MultiNLI dataset, which consists of 392 702 NLI hypothesis-premise pairs.", "### Training procedure\n\n\nDeBERTa-v3-base-mnli was trained using the Hugging Face trainer with the following hyperparameters.", "### Eval results\n\n\nThe model was evaluated using the matched test set and achieves 0.90 accuracy.\n\n\nLimitations and bias\n--------------------\n\n\nPlease consult the original DeBERTa paper and literature on different NLI datasets for potential biases.", "### BibTeX entry and citation info\n\n\nIf you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.", "### Ideas for cooperation or questions?\n\n\nIf you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or LinkedIn", "### Debugging and issues\n\n\nNote that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues.\n\n\nModel Recycling\n---------------\n\n\nEvaluation on 36 datasets using MoritzLaurer/DeBERTa-v3-base-mnli as a base model yields average score of 80.01 in comparison to 79.04 by microsoft/deberta-v3-base.\n\n\nThe model is ranked 1st among all tested models for the microsoft/deberta-v3-base architecture as of 09/01/2023.\n\n\nResults:\n\n\n\nFor more information, see: Model Recycling" ]